
Node.js in a Kubernetes world

By Blog, Node.js, Node+JS Interactive, tutorial

7 basic tasks Node.js developers need to understand about Kubernetes

This post was written by Michael Dawson, OpenJS Foundation Board Member and Node.js community lead at IBM. This first appeared on IBM Developer.

Kubernetes is fast becoming the leader for deploying and managing production applications, including those written in Node.js. Kubernetes is complex, though, and learning the ins and outs of the technology can be difficult, even for a seasoned developer.

As Node.js application developers, we may not need to manage Kubernetes deployments in our day-to-day jobs or be experts in the technology, but we must consider Kubernetes when developing applications.

As a reminder, Docker and Kubernetes are the foundation of most modern clouds, including IBM Cloud. Cloud-native development refers to creating applications that you will deploy in Docker or Kubernetes, and cloud-native applications often conform to the tenets of a 12-factor application.

In this article, I try to answer the question: “As a Node.js developer, what do I need to know about Kubernetes?” I cover a number of key tasks you must complete when building your Node.js application in a cloud-native manner, including:

  • Building Docker images
  • Deploying containerized applications
  • Testing your applications
  • Health checking
  • Logging
  • Gathering metrics
  • Upgrading and maintaining containerized applications

Building Docker images

Deploying to Docker or Kubernetes requires that you build a Docker image. To do that, you typically start with an existing image and layer in the additional components that you need.

When building Node.js applications, the community provides a number of official Docker images that you can start with. These images add the Node.js binaries to an existing Linux distribution (Debian or Alpine) and offer a way to bundle your application into the image so that it runs when you start the Docker container.

Each time the project publishes a new release, it creates new Docker images. For example, when 12.13.1 was released, the new Docker image “node:12.13.1” was made available.

There are three variants of the Node.js images:

  1. Debian-based images with the core components needed to build and test Node.js applications.
  2. Slim images which are Debian-based images with only the minimal packages needed to run a Node.js application after it is already built.
  3. Alpine-based images for those who need the smallest container size.

There are a number of tags that you can use to specify the image you want. See Docker Hub for the full list.

When choosing your base image, keep in mind:

  • Alpine binaries are not built as part of the main Node.js releases. This means that they are built from source and there is more risk of regressions.
  • You should likely use the standard Debian image together with the Slim image in a multistage build in order to achieve a smaller resulting image size. More on that in the next section.

Build your image

Once you’ve chosen your base image, the next step is to actually build the image which bundles in your application. This is done through a Dockerfile. The simplest Dockerfile is as follows:

FROM node:12.13.1
EXPOSE 3000
COPY server.js .
CMD node server.js

This Dockerfile simply copies your application code (server.js) into the official image with the node:12.13.1 binaries and indicates that server.js should be run when the image is started. Note that you did not have to install Node.js as it is already installed and in the default path as part of the node:12.13.1 base image.

To build an image from this file, put the file into a directory with your application code and then, from that directory, run docker build . -t test1:new. This builds a Docker image named test1 with the tag new, based on the contents of the current directory.

While this sounds straightforward, Dockerfiles can quickly become complicated because you want to build the image in a way that only includes the minimum components needed to run your application. This should exclude intermediate artifacts, for example the individual object files created when native addons are compiled. To exclude intermediate artifacts, a multistage build is commonly used where artifacts are generated in one step or image and then copied out to another image in a later stage. Read more about this in the Docker documentation.
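
To give a rough idea of what that looks like, here is a minimal multistage Dockerfile sketch: it installs dependencies in the full Debian-based image and copies only the result into the Slim image. The working directory, file names, and npm commands are illustrative assumptions, not taken from the article:

# Build stage: use the full image, which has the tools needed to install
# dependencies and compile any native addons.
FROM node:12.13.1 AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production
COPY . .

# Runtime stage: copy only the built application into the smaller Slim image.
FROM node:12.13.1-slim
WORKDIR /app
COPY --from=build /app .
EXPOSE 3000
CMD ["node", "server.js"]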

Once you have built the image, you need to push it to a registry and then you can run and test your application under Docker or Kubernetes. It’s best to test in as close an environment to the production deployment as possible. Today, that often means Kubernetes.

Deploying your application to Kubernetes

In order to deploy and test your image in Kubernetes you use one or more deployment artifacts. At the lowest level, much of the Kubernetes configuration is through YAML. To deploy and access your application through a port, you need three YAML files.

To start, you need a deployment which specifies which image to use (the one you pushed to the registry earlier). For example:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test1
  labels:
    app: test1
spec:
  replicas: 2
  selector:
    matchLabels:
      app: test1
  template:
    metadata:
      labels:
        app: test1
    spec:
      containers:
      - name: test1
        image: test1:new
        imagePullPolicy: Never
        args:
In this case, it’s specifying through the image: entry that it should use the Docker image named test1 with the new tag (which matches what we built in the earlier step). You would use the following command to deploy to Kubernetes: kubectl apply -f deployment.yaml.

Similarly, you’d need to deploy a service to provide a way to get to the running containers that make up the deployment and an ingress to allow external access to that service:

kubectl apply -f service.yaml

kubectl apply -f ingress.yaml

I won’t go into the details here, but the key is that you need to understand and write the YAML for deployments, services, and ingresses.
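
For a rough sense of what those files contain, minimal service.yaml and ingress.yaml files for this example might look something like the following. The port, host name, and Ingress API version are assumptions (the Ingress API in particular varies by Kubernetes version):

apiVersion: v1
kind: Service
metadata:
  name: test1
spec:
  selector:
    app: test1
  ports:
  - port: 3000
    targetPort: 3000

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: test1
spec:
  rules:
  - host: test1.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: test1
          servicePort: 3000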

If this sounds a bit cumbersome, it is, and you are not the first one to think so. Helm charts were developed to bring together a deployment, service, and ingress in a way that simplifies deployments.

The basic structure of a Helm chart is as follows:

package-name/
  charts/
  templates/
  Chart.yaml
  LICENSE
  README.md
  requirements.yaml
  values.yaml

The base Helm chart has templates which you can configure to avoid having to write your own deployment, service, and ingress YAML files. If you have not already written those files, it is probably easier to use that configuration option. You can, however, just copy your files into the template directory and then deploy the application with helm install dirname, where dirname is the name of the directory in which the chart files are located.

Helm is billed as “the package manager for Kubernetes” and does make it easier to deploy an application configured through multiple YAML files and possibly multiple components. There are, however, shortcomings, including concerns over security and relying only on static configuration versus being able to have programmatic control. Operators look to address these issues. Read more about them here: https://github.com/operator-framework.

Iterative testing

After you figure out how to deploy your application, you need to test it. The development and test process is iterative, so you will probably need to rebuild, deploy, and test a number of times. Check out our article on test-driven development.

If you are testing in Docker, you need to build a new image, start a container, and then access the image during each iteration. If you are testing in Kubernetes, you have to delete pods, redeploy, and so forth. Even when using Helm charts, this will take a number of commands and time for things to spin up. The key challenge is that this all takes time and that can slow down your development process.
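
As a rough illustration, assuming the deployment shown earlier (a locally built image and imagePullPolicy: Never), one pass of that loop might look something like:

docker build . -t test1:new

kubectl delete pods -l app=test1

kubectl get pods

Deleting the pods causes the deployment to recreate them with the rebuilt image, and kubectl get pods lets you watch until the new pods are ready before you rerun your tests.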

So far, we’ve talked about building and testing an image for your application, but not what’s in your application itself. I can’t talk about your specific application, but let’s look at what you should build into your application in order to support a cloud-native deployment.

Health checking your applications

The first thing you need to build into your application is support for liveness and readiness endpoints. Kubernetes has built-in functionality to check these endpoints:

  • Liveness – restarts the container when the liveness endpoint does not respond, which indicates the application is no longer alive.
  • Readiness – defers sending traffic until the application is ready to accept it, or stops sending traffic if the application is no longer able to accept it.

Kubernetes supports three probe types:

  • Running a command (through shell in the container)
  • HTTP probe
  • TCP probe

As a Node.js developer, you will probably use an HTTP probe. A response code of >=200 and less than 400 means everything is okay.

You configure the probes through the deployment YAML. For example:

apiVersion: v1
kind: Pod
metadata:
  name: test1
  labels:
    app: test1
spec:
  containers:
  - name: test1
    image: test1:latest
    imagePullPolicy: Never
    args:
    - /server
    livenessProbe:
      httpGet:
        path: /live
        port: 3000
      initialDelaySeconds: 5
      periodSeconds: 10
    readinessProbe:
      httpGet:
        path: /ready
        port: 3000
      initialDelaySeconds: 1
      periodSeconds: 10

In this code listing, I added a liveness probe on port 3000 using the /live path.

Additionally:

  • initialDelaySeconds controls how quickly to start checking (in order to allow enough time for the application to start up).
  • periodSeconds controls how often to check.
  • timeoutSeconds is 1 by default, so if your probe can take longer than that you’ll have to set that value as well.
  • failureThreshold controls how many times the probe needs to fail before the container will restart. You want to ensure that you don’t get into a loop where the application continues to restart and never has long enough to become ready to serve the probes.
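
For example, a liveness probe that tolerates slower responses and a few consecutive failures before restarting the container might look like this (the values are only illustrative):

livenessProbe:
  httpGet:
    path: /live
    port: 3000
  initialDelaySeconds: 5
  periodSeconds: 10
  timeoutSeconds: 3
  failureThreshold: 3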

Similarly, I added a readinessProbe on port 3000 using the /ready path. To test this out, I created an application that only responds successfully to the readinessProbe after 10 seconds. If I query the pods after deploying the application the output is:

user1@minikube1:~/test$ kubectl get all
NAME           READY       STATUS        RESTARTS        AGE
pod/test1      0/1         Running       0               6s

This shows 0/1 under READY, indicating that the application is not yet ready to accept traffic. After 10 seconds, it should show 1/1, and Kubernetes will start routing traffic to the pod. This is particularly important when we scale up or down: we don’t want to route traffic to a new container until it is ready, or responses will slow down even though another container is already up and capable of responding quickly.

The readiness and liveness probes may be as simple as:

app.get('/ready', (req, res) => {
  res.status(200).send();
});

app.get('/live', (req, res) => {
  res.status(200).send();
});

However, you will often need to check the liveness and readiness of other components within your application, and this can get complicated if you have a lot of subcomponents to check. In that case, I recommend using a package like CloudNativeJS/cloud-health that allows you to register additional checks.
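
If you prefer to see the idea without a library, here is a minimal hand-rolled sketch of a readiness endpoint that aggregates several subcomponent checks. The checkDatabase and checkCache functions are hypothetical placeholders for whatever your application actually depends on:

const express = require('express');
const app = express();

// Hypothetical subcomponent checks: each returns a promise that resolves when
// the subcomponent is healthy and rejects when it is not.
const checkDatabase = async () => { /* e.g. ping the database */ };
const checkCache = async () => { /* e.g. ping the cache */ };
const checks = [checkDatabase, checkCache];

app.get('/ready', async (req, res) => {
  try {
    // The application is only ready if every subcomponent check passes.
    await Promise.all(checks.map((check) => check()));
    res.status(200).send();
  } catch (err) {
    res.status(503).send();
  }
});

app.listen(3000);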

Logging

Node.js developers also need to know how to do logging in a cloud-native environment. In container development, writing logs out to disk does not generally make sense because of the extra steps needed to make the logs available outside the container—and they will be lost once the container is stopped.

Logging to standard out (stdout) is the cloud-native way, and structured logging (for example, using JSON) is the current trend. One of my current favorite modules is pino, which is a fast, structured logger that is easy to use.
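
As a quick illustration (not from the original article), a minimal pino setup that writes structured JSON lines to stdout looks like this:

const pino = require('pino');
const logger = pino();

// Each call writes a single JSON line to stdout, which Kubernetes can collect.
logger.info('server started');
logger.error({ path: '/ready' }, 'readiness check failed');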

Kubernetes has support for watching standard out, so it’s easy to get the log output. For example, you can use kubectl logs -f <name of pod>, where <name of pod> is the name of a specific pod, as follows:

user1@minikube1:~/test$ kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
test1-7bbc4449b4-98w2j   1/1     Running   0          30s
test1-7bbc4449b4-zqx6f   1/1     Running   0          30s
user1@minikube1:~/test$ kubectl logs -f test1-7bbc4449b4-98w2j
{"level":30,"time":1584451728646,"pid":1,"hostname":"test1-7bbc4449b4-98w2j","msg":"/live","v":1}
{"level":30,"time":1584451734486,"pid":1,"hostname":"test1-7bbc4449b4-98w2j","msg":"ready","v":1}
{"level":30,"time":1584451738647,"pid":1,"hostname":"test1-7bbc4449b4-98w2j","msg":"/live","v":1}
{"level":30,"time":1584451748643,"pid":1,"hostname":"test1-7bbc4449b4-98w2j","msg":"/live","v":1}
{"level":30,"time":1584451758647,"pid":1,"hostname":"test1-7bbc4449b4-98w2j","msg":"/live","v":1}
{"level":30,"time":1584451768643,"pid":1,"hostname":"test1-7bbc4449b4-98w2j","msg":"/live","v":1}
{"level":50,"time":1584451768647,"pid":1,"hostname":"test1-7bbc4449b4-98w2j","msg":"down","v":1}
{"level":30,"time":1584451778648,"pid":1,"hostname":"test1-7bbc4449b4-98w2j","msg":"/live","v":1}
{"level":30,"time":1584451788643,"pid":1,"hostname":"test1-7bbc4449b4-98w2j","msg":"/live","v":1}

Cloud deployment environments like the IBM Cloud also have more sophisticated options for collecting and aggregating logs across containers, including LogDNA or the Elastic Stack (Elasticsearch, Logstash, and Kibana).

Gathering metrics

In a Kubernetes deployment, your application runs in containers, and there may be multiple copies of each container; it is not necessarily easy to find or access those containers in order to gather information about how your application is running. For this reason, it is important to export from your containers the key metrics you need in order to understand and track the health of your application.

Prometheus is the de facto standard on this front. It defines a set of metrics that you should export and gives you the ability to add application-specific metrics that are important to your business.

As an example, the following shows the metrics returned by one of the available Prometheus packages:

# HELP process_cpu_user_seconds_total Total user CPU time spent in seconds.
# TYPE process_cpu_user_seconds_total counter
process_cpu_user_seconds_total 1.7980069999999984 1585160007722

# HELP process_cpu_system_seconds_total Total system CPU time spent in seconds.
# TYPE process_cpu_system_seconds_total counter
process_cpu_system_seconds_total 0.931571000000001 1585160007722

# HELP process_cpu_seconds_total Total user and system CPU time spent in seconds.
# TYPE process_cpu_seconds_total counter
process_cpu_seconds_total 2.7295780000000067 1585160007722

# HELP process_start_time_seconds Start time of the process since unix epoch in seconds.
# TYPE process_start_time_seconds gauge
process_start_time_seconds 1584451714

# HELP process_resident_memory_bytes Resident memory size in bytes.
# TYPE process_resident_memory_bytes gauge
process_resident_memory_bytes 17645568 1585160007723

# HELP process_virtual_memory_bytes Virtual memory size in bytes.
# TYPE process_virtual_memory_bytes gauge
process_virtual_memory_bytes 881635328 1585160007723

# HELP process_heap_bytes Process heap size in bytes.
# TYPE process_heap_bytes gauge
process_heap_bytes 90226688 1585160007723

# HELP process_open_fds Number of open file descriptors.
# TYPE process_open_fds gauge
process_open_fds 20 1585160007723

# HELP process_max_fds Maximum number of open file descriptors.
# TYPE process_max_fds gauge
process_max_fds 201354

# HELP nodejs_eventloop_lag_seconds Lag of event loop in seconds.
# TYPE nodejs_eventloop_lag_seconds gauge
nodejs_eventloop_lag_seconds 0.000200848 1585160007723

# HELP nodejs_active_handles Number of active libuv handles grouped by handle type. Every handle type is C++ class name.
# TYPE nodejs_active_handles gauge
nodejs_active_handles{type="Socket"} 2 1585160007722
nodejs_active_handles{type="Server"} 1 1585160007722

# HELP nodejs_active_handles_total Total number of active handles.
# TYPE nodejs_active_handles_total gauge
nodejs_active_handles_total 3 1585160007722

# HELP nodejs_active_requests Number of active libuv requests grouped by request type. Every request type is C++ class name.
# TYPE nodejs_active_requests gauge
nodejs_active_requests{type="FSReqCallback"} 2

# HELP nodejs_active_requests_total Total number of active requests.
# TYPE nodejs_active_requests_total gauge
nodejs_active_requests_total 2 1585160007722

# HELP nodejs_heap_size_total_bytes Process heap size from node.js in bytes.
# TYPE nodejs_heap_size_total_bytes gauge
nodejs_heap_size_total_bytes 5971968 1585160007723

# HELP nodejs_heap_size_used_bytes Process heap size used from node.js in bytes.
# TYPE nodejs_heap_size_used_bytes gauge
nodejs_heap_size_used_bytes 4394216 1585160007723

# HELP nodejs_external_memory_bytes Nodejs external memory size in bytes.
# TYPE nodejs_external_memory_bytes gauge
nodejs_external_memory_bytes 3036410 1585160007723

# HELP nodejs_heap_space_size_total_bytes Process heap space size total from node.js in bytes.
# TYPE nodejs_heap_space_size_total_bytes gauge
nodejs_heap_space_size_total_bytes{space="read_only"} 262144 1585160007723
nodejs_heap_space_size_total_bytes{space="new"} 1048576 1585160007723
nodejs_heap_space_size_total_bytes{space="old"} 3256320 1585160007723
nodejs_heap_space_size_total_bytes{space="code"} 425984 1585160007723
nodejs_heap_space_size_total_bytes{space="map"} 528384 1585160007723
nodejs_heap_space_size_total_bytes{space="large_object"} 401408 1585160007723
nodejs_heap_space_size_total_bytes{space="code_large_object"} 49152 1585160007723
nodejs_heap_space_size_total_bytes{space="new_large_object"} 0 1585160007723

# HELP nodejs_heap_space_size_used_bytes Process heap space size used from node.js in bytes.
# TYPE nodejs_heap_space_size_used_bytes gauge
nodejs_heap_space_size_used_bytes{space="read_only"} 32296 1585160007723
nodejs_heap_space_size_used_bytes{space="new"} 419112 1585160007723
nodejs_heap_space_size_used_bytes{space="old"} 2852360 1585160007723
nodejs_heap_space_size_used_bytes{space="code"} 386368 1585160007723
nodejs_heap_space_size_used_bytes{space="map"} 308800 1585160007723
nodejs_heap_space_size_used_bytes{space="large_object"} 393272 1585160007723
nodejs_heap_space_size_used_bytes{space="code_large_object"} 3552 1585160007723
nodejs_heap_space_size_used_bytes{space="new_large_object"} 0 1585160007723

# HELP nodejs_heap_space_size_available_bytes Process heap space size available from node.js in bytes.
# TYPE nodejs_heap_space_size_available_bytes gauge
nodejs_heap_space_size_available_bytes{space="read_only"} 229576 1585160007723
nodejs_heap_space_size_available_bytes{space="new"} 628376 1585160007723
nodejs_heap_space_size_available_bytes{space="old"} 319880 1585160007723
nodejs_heap_space_size_available_bytes{space="code"} 3104 1585160007723
nodejs_heap_space_size_available_bytes{space="map"} 217168 1585160007723
nodejs_heap_space_size_available_bytes{space="large_object"} 0 1585160007723
nodejs_heap_space_size_available_bytes{space="code_large_object"} 0 1585160007723
nodejs_heap_space_size_available_bytes{space="new_large_object"} 1047488 1585160007723

# HELP nodejs_version_info Node.js version info.
# TYPE nodejs_version_info gauge
nodejs_version_info{version="v12.13.1",major="12",minor="13",patch="1"} 1

There are Node.js packages that can help you export the recommended set of metrics and the ones you need that are specific to your application. prom-client is my current favorite for integrating Prometheus metrics into a Node.js application.
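
As a sketch of how that integration might look with Express (the endpoint path and metric name here are illustrative assumptions):

const express = require('express');
const client = require('prom-client');

const app = express();

// Collect the default Node.js and process metrics, similar to the output shown above.
client.collectDefaultMetrics();

// An example application-specific metric.
const requestCounter = new client.Counter({
  name: 'app_requests_total',
  help: 'Total number of requests handled',
});

app.get('/', (req, res) => {
  requestCounter.inc();
  res.send('hello');
});

// Endpoint for Prometheus to scrape.
app.get('/metrics', async (req, res) => {
  res.set('Content-Type', client.register.contentType);
  res.end(await client.register.metrics());
});

app.listen(3000);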

Kubernetes also generates Prometheus metrics, and Kubernetes distributions or cloud platforms often make it easy to scrape your requested endpoints, aggregate them, and then graph the resulting data. Even if you install Kubernetes locally, there are Helm charts that make it easy to do this. The end result is that if you export Prometheus metrics from your application it will generally be quite easy for you or your operations teams to aggregate, chart, and alert on this data.

Maintaining and upgrading your applications

As with all application development, you have to think about maintaining and upgrading your application. Kubernetes helps you update your application deployment by rolling out updated containers, but it does not help you update the contents of the containers you built.

You have to have a plan for how to maintain and upgrade each component you use to build your container or that you add into your container. These include Docker images, Dockerfiles, liveness and readiness endpoints, logging, and metrics.

If you have X projects with Y containers, you can end up with a large number of container images that you must maintain. When you decide to update a dependency (for example, the logger or the Prometheus package), you have to figure out which images need to be updated, what versions they are already using, and the like.

What happens if you move to a new team? You’ll need to figure out what their Dockerfiles look like, how the deployment artifacts are organized, what endpoint names are used for liveness and readiness, which logger they are using, and more, just to get started.

I don’t know about you, but at this point I start to think “enough is enough.” I just want to focus on the application code. Do I really need to think about Docker, Dockerfiles, YAML, and all this stuff that’s outside of my application itself?

Separating concerns

What I’d really like is some separation of concerns and tooling to help me build/test efficiently using best practices agreed on by my organization. Something along these lines:

[Diagram: a common stack shared across the organization, with the application and its additional packages layered on top]

The two main components are:

  • a common stack agreed on by developers, architects, and operations which is re-used across projects within my organization
  • the application and additional packages that I need for the applications that I’m working on

For me, having consistency in the areas that don’t affect the “secret sauce” of the application I’m working on brings great benefits in ongoing maintenance and upgrades, and even in moving across teams. I look at this as consistency where it makes sense, as opposed to uniformity.

Tools to help with consistency

There are a few tools that can help isolate the tasks that you don’t need to worry about so that you can just focus on your code. I’ll introduce three new ones that I think can help you succeed.

Appsody

The Appsody open source project attempts to separate the operations aspects of application development from the application code, allowing developers to focus on their code while building on a common stack (or stacks) that can be shared across an organization.

Appsody helps in the following ways:

  • Builds a base stack that you can layer applications on top of. This includes most of the base components I’ve mentioned above, which are needed for cloud-native deployment.
  • Accelerates the iterative local development and test cycle.
  • Simplifies creating Docker images using best practices and deploying them to Kubernetes.

There are Node.js-specific stacks in Appsody, including:

  • nodejs – base stack for any Node.js application
  • nodejs-express – base stack with express built in
  • nodejs-loopback – base stack with loopback built in
  • nodejs-functions – base stack where you can write functions in the Function-as-a-Service (FaaS) style

I’m excited about Appsody offering reusable stacks that can be shared across teams, and that allows architects and operators to update the stack separately from the application itself. This is a newish approach (hence, the incubator in the path for the stacks), so I won’t be surprised if there are tweaks to the approach or tools as people experiment and get more familiar with it.

It’s great to see an open source code base starting to form and allow organizations to start proving out the approach. The existing stacks integrate liveness and readiness endpoints and export a Prometheus endpoint. At the same time, the base Appsody framework also provides commands for building and deploying containers to Kubernetes or testing locally in Docker. The result is that much of what you need to know or understand as a Node.js developer for a cloud-native deployment is supported by the tooling.
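
To give a feel for the workflow, the basic Appsody CLI flow is roughly the following (exact commands and stack names may vary by version):

appsody init nodejs-express

appsody run

appsody build

appsody deploy

appsody init creates a new project from a stack, appsody run builds and runs it locally in a container for iterative development, and appsody build and appsody deploy produce a Docker image and deploy it to Kubernetes.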

While I still think it is good for you to have a basic understanding of Kubernetes and cloud-native development, these new tools ensure you won’t need to be an expert on the ins and outs of deployment YAML files, multistage Dockerfiles, and the like.

Codewind

If you like what you see in Appsody but prefer a graphical user interface, you might want to check out the open source Codewind project. It adds a nice GUI, has Appsody integration, and allows you to bring your own visual editor (for example, VS Code). It also adds in performance monitoring and Tekton pipelines. I’m more of a command line vi person, but if you’re interested you can check it out here.

IBM Cloud Pak for Applications

Finally, if you are sold on this approach and want to build on a supported stack from IBM, check out the IBM Cloud Pak for Applications, which includes the stack-based approach as well as support for Node.js.

Final thoughts

I hope you’ve gained a more concrete understanding of what cloud-native development and Kubernetes mean to you as a Node.js developer, and I hope I’ve piqued your interest in stack-based development and Appsody. I talked about this subject at the most recent Node+JS Interactive conference; if you’d like to learn more, check out the slides and watch the recording.

Behind the Scenes on New Professional Node Certification from OpenJS

By Blog, Certification, Event, Node.js, Node+JS Interactive

David Clements (@davidmarkclem), Principal Architect of NearForm, and Adrian Estrada, (@edsadr), VP of NodeSource, gave a comprehensive overview of the new Node.js certifications at Node + JS Interactive, and we have the video!

Full video here 

In the talk, David and Adrian explore the reasons for the creation of the certification and why developers should get certified with OpenJS. They also go into detail around the guiding principles, how quality assurance has been implemented and what is being done to ensure long-term integrity. 

In the section on Inside the Certifications, they look at exam expectations, pricing, scholarships and an overview of the two certifications.

They ended with directions for the future of the certifications.

The industry is mature, and there is more demand for Node.js skills than there is availability. OpenJS believes that the Node.js certifications create new opportunities and are an excellent way to improve your resume and move to projects and jobs that are higher paying and more fulfilling.

Video by Section

It Takes a Community (:35)

Cross Organizational Effort (1:03)

Why (1:41)

Certification Committee (2:50)

Principles (3:20)

Quality Assurance (4:00)

Long-term Integrity Measures (4:45)

Inside the Certifications (5:20)

Exam Expectations (5:25)

Virtual Machine Environment (6:24)

Exams Overview (7:23)

  • OpenJS Node Application Developer
  • OpenJS Node Services Developer

Pricing (11:23)

Scholarships (12:48)

Regular Promotions (13:45)

Future (14:43)

More Information
OpenJS Node.js Services Developer (JSNSD)
The OpenJS Node.js Services Developer certification is for the Node.js developer with at least two years of experience creating RESTful servers and services with Node.js. It is designed for anyone looking to demonstrate competence in creating RESTful Node.js Servers and Services (or Microservices) with a particular emphasis on security practices.

OpenJS Node.js Application Developer (JSNAD)
The OpenJS Node.js Application Developer certification is ideal for the Node.js developer with at least two years of experience working with Node.js. It is designed for anyone looking to demonstrate competence with Node.js to create applications of any kind, with a focus on knowledge of Node.js core APIs.

OpenJS Foundation Year in Review

By Announcement, Blog, Node+JS Interactive

While only 10 months old, the OpenJS Foundation has had quite a year: from merging two legacy foundations, to bringing on new leadership, to accepting some fantastic new incubating projects, exciting doesn’t begin to describe it.

We couldn’t ring in the new year without taking a walk down OpenJS Foundation memory lane and looking back at some amazing community milestones and moments. While we take this time to reflect on the big strides this community has made, we are also thrilled for what’s to come. Thanks to all who make the OpenJS Foundation all that it is! 

March 2019:

May 20, 2019

June 2019

July 2019

  • Michael Dawson, Node.js Community lead and Senior Software Developer at IBM and Kris Borchers, Senior Program Manager at Microsoft join the OpenJS Foundation Board of Directors
  • Joe Sepi, Developer Advocate and Software Engineer at IBM named first-ever CPC Chairperson.

August 2019

September 20, 2019

October 2019

November 20, 2019

December 12, 2019

Thanks again for a great year! Also, if you haven’t heard, we’ve announced dates for next year’s OpenJS Foundation Conference. We’ll be headed to Austin, TX June 23 and 24. Make plans now to join us! As always, stay connected through the channels available on our website.

Node+JS Interactive Day Two Recap

By Blog, Event, Node+JS Interactive

Node+JS Interactive Day Two
Day two of Node+JS Interactive was filled with incredible talks, breakout sessions, workshops on certifications, and keynotes highlighting new projects and key research and trends important to developers. We even had Nick Nisi from JS Party, an awesome podcaster from the JavaScript community, join us on-site for a live taping of the show!

Below are just a few highlights!

Breakouts
Marian from Pionerasdev talks about her journey founding an amazing organization that helps women and girls in Colombia learn how to code and find tech jobs. The group has skyrocketed in numbers, going from 5 members to more than 1,200 in less than five years.

Members of the Node.js Technical Steering Committee spend some time talking about the health of the project, what’s to come, and where they could use some help.

Joe Sepi, IBM Software Engineer and Dev Advocate, as well as CPC Chair, gave his presentation on the Promises API in Node.js core to a packed room.

Nick Nisi, a panelist on JS Party, is shown here interviewing Vladimir de Turckheim on Node.js loader hooks. Nick also did a great job getting a bunch of folks on the show, including Marian Villa, founder of Pionerasdev; Rich Trott and Anna Henningsen on Node.js worker threads; and Chris Wilcox and Jason Etcovitch on bots.

Felix Rieseberg, Senior Staff Engineer at Slack and a member of the Electron Outreach Working Group, gave a talk on Electron and how to build cross-platform applications.

Ben Morss and Kristofer Baxter, both Googlers, talk about productive Web development powered by AMP.

Keynotes
The afternoon keynotes kicked off with the wonderful Christian Bromann, Senior Lead Software Engineer at Sauce Labs and a Programming Committee Leader for Node+JS Interactive, as master of ceremonies. In addition to being a great MC, he was a great partner in getting quality talks and keynotes selected.

Jory Burson moderates a panel with two new OpenJS Foundation incubation projects, AMP and Electron. John Kleinschmidt, Senior Software Engineer at Microsoft and Ben Morss, Developer Advocate from Google, talk through their respective projects and the benefits of joining the Foundation.

Kadir Topal from Mozilla delivered key results of the recently finalized MDN Developer Needs Assessment.

The keynotes were capped off with a panel on 2020 tech trends. The panel brought together developer advocates, industry experts, and influencers within the media to discuss big topics including security, monoculture in tech, and diversity trends. Day one keynoter Ellie Galloway also got a much-deserved shout out! Panelists included moderator Nick Nisi from JS Party, Liz Parody from NodeSource, Alex Williams from The New Stack, and Chris Aniszczyk from the Linux Foundation.

This year’s event wouldn’t be possible without the generous support from our sponsors:

Google Cloud – Diamond
Microsoft Azure – Platinum
Heroku, IBM and Sentry – Gold
NearForm, Red Hat OpenShift, Université de Montréal – Silver

Finally, we look forward to seeing everyone in Austin, TX  June 23 and 24th at the next global OpenJS Foundation conference!

Thanks to the amazing community for making our time together so worthwhile!

Node+JS Interactive Day One Recap

By Blog, Event, Node+JS Interactive

Day one at Node+JS Interactive has come to a close and was jam-packed! Today, more than 500 OpenJS Foundation community members convened at the Montreal Convention Center. We had thought-provoking keynotes, welcomed a brand new project (Hey, Electron!), announced dates for next year’s conference (mark the calendar for June 23 and 24 in Austin, TX), networked during a buzz-worthy sponsor showcase, and joined some amazing breakouts.

Here are just some of the highlights in photos:

Electron joins the OpenJS Foundation

By Blog, Event, In The News, Node+JS Interactive, Project Update

The popular web framework for building desktop apps plays an important role in the adoption and development of JavaScript.

MONTREAL – December 11, 2019 – The OpenJS Foundation today announced that the open source web framework Electron has been accepted into the Foundation’s incubation program. Electron, an open source framework created for building desktop apps using JavaScript, HTML, and CSS, is based on Node.js and Chromium. Additionally, it is widely used in many well-known applications, including Discord, Microsoft Teams, OpenFin, Skype, Slack, Trello, Visual Studio Code, and many more.

The OpenJS Foundation, which provides vendor-neutral support for sustained growth within the open source JavaScript community, delivered the news at the Foundation’s flagship event, Node+JS Interactive, in Montreal. 

“We’re heading into 2020 excited and honored by the trust the Electron project leaders have shown through this significant contribution to the new OpenJS Foundation,” said Robin Ginn, Executive Director of the OpenJS Foundation. “Electron is a powerful development tool used by some of the most well-known companies and applications. On behalf of the community, I look forward to working with Electron and seeing the amazing contributions they will make.” 

Electron’s cross-platform capabilities make it possible to build and run apps on Windows, Mac, and Linux computers. Initially developed by GitHub in 2013, today the framework is maintained by a number of developers and organizations. Electron is suited for anyone who wants to ship visually consistent, cross-platform applications quickly and efficiently.

“We’re excited about Electron’s move to the OpenJS Foundation and we see this as the next step in our evolution as an open source project,” said Jacob Groundwater, Manager at ElectronJS and Principal Engineering Manager at Microsoft. “With the Foundation, we’ll continue on our mission to play a prominent role in the adoption of web technologies by desktop applications and provide a path for JavaScript to be a sustainable platform for desktop applications. This will enable the adoption and development of JavaScript in an environment that has traditionally been served by proprietary or platform-specific technologies.”

“We’re committed to open source and developer collaboration, and thrilled for Electron to be a part of the Foundation’s incubation program,” said Sarah Novotny, Partner PM Manager, Azure, Microsoft. “We look forward to further enhancing the open source project for contributors, maintainers, and developers building on the framework; while exposing the project to a broader audience.”

“Electron is a great example of how interconnected the JavaScript ecosystem can be. Built on Chromium and Node.js, Electron is an amazing tool that empowers developers to create great cross-platform desktop experiences,” said Myles Borins, OpenJS Foundation Board member and Developer Advocate at Google. “It’s extremely exciting to see this project join the Foundation and stepping towards a more open governance model.”

“The Cross Project Council is thrilled to bring Electron into the OpenJS Foundation community,” said Joe Sepi, Cross Project Council Chair, and Open Source Engineer & Advocate at IBM. “Collectively, we are building something sustainable for the long-term benefit of community members and end-users. We are excited to work with Electron, and to have them be part of our mission.”

“On behalf of the OpenJS Foundation Board of Directors, it’s my pleasure to welcome Electron as the newest incubating project to the Foundation,” said Todd Moore, OpenJS Foundation Board Chair and Vice President of Open Technology and Developer Advocacy at IBM. “Bringing Electron into the Foundation is a great way to cap 2019, and continue to build our momentum into next year.”

Representatives from Electron will be featured in both a keynote and breakout session at Node+JS Interactive. 

  • On December 12, at 11:20 am ET, Felix Rieseberg will present a breakout session titled “Electron: Desktop Apps with JavaScript,” and give a technical introduction to Electron. Building a small code editor live on stage, he’ll cover the basics and explain both benefits and challenges of using Node.js and JavaScript to build major desktop applications.

About OpenJS Foundation 

The OpenJS Foundation is committed to supporting the healthy growth of the JavaScript ecosystem and web technologies by providing a neutral organization to host and sustain projects, as well as collaboratively fund activities for the benefit of the community at large. The OpenJS Foundation is made up of 32 open source JavaScript projects including Appium, Dojo, jQuery, Node.js, and webpack and is supported by 30 corporate and end-user members, including GoDaddy, Google, IBM, Intel, Joyent, and Microsoft. These members recognize the interconnected nature of the JavaScript ecosystem and the importance of providing a central home for projects which represent significant shared value.

Google at Node+JS Interactive 2019

By Blog, Event, Node+JS Interactive

Google Cloud is extremely excited about our fourth annual sponsorship of the Node+JS Interactive conference. 2019 marks our largest engagement yet, and we have a big group of Googlers who can’t wait to get to Montreal! Folks representing AMP, GCP, the Google Open Source Program Office, Security, and TensorFlow.js will be available to chat in our lounge area.

Google at last year’s Node+JS Interactive Event in Vancouver.

Sessions

There are also a number of Googler-run sessions that you can attend; we hope to see you there!

Wednesday December 11:

2:20 pm “Securing the DOM from the Bottom Up” with Krzysztof Kotowicz

3:40 pm “Extra Special Modules” with Myles Borins

5:20 pm “Oh No! The Robots Have Taken Over” with Christopher Wilcox

Thursday December 12:

10:20 am “Rethinking JavaScript Test Coverage” with Benjamin Coe

12:00 pm “TensorFlow.js – Bringing ML and Linear Algebra to Node.js” with Sandeep Gupta and Kangyi Zhang

2:00 pm “Work Less and Do More: Google Sheets for JavaScript Developers” with Leah Cole and Franziska Hinkelmann

Come to the Google Cloud Lounge for demos and codelabs

Visit our lounge area throughout the event to meet folks who can answer your questions and show demos of our various technologies. We’ll also be running codelabs for hands-on learning, supported by Google experts. Want to deploy Node.js to Kubernetes? We got ya! Time to write your first service worker? Got that too! Audio recognition with TensorFlow.js? Why not!!!?

Swagless in 2019

As part of our commitment to community development and the environment, Google Cloud has chosen to go swagless this year. In lieu of swag, we are thrilled to support the work being done by TechAide Montreal. TechAide Montreal’s mission is to unite people from diverse backgrounds and life stories and to bring the tech community together to give back to Centraide and help break the cycle of poverty and social exclusion in Greater Montreal.

Don’t worry, there will still be stickers.

Meet the experts

We’ll be running private 1:1s with Googlers on site. If you are interested, please fill out this Google Form and we’ll get back to you when your session has been scheduled.