Node.js Certifications update: Node.js 10 to Node.js 12

Categories: Announcement, Blog, Certification

The OpenJS Node.js Application Developer (JSNAD) and the OpenJS Node.js Services Developer (JSNSD) Exams will be updated from Node.js version 10, which is now in maintenance, to Node.js version 12, which is the current LTS (Long Term Support) line. Changes will come into effect on June 16, 2020. All exams taking place after 8:00 am PT on June 16, 2020 will be based on Node.js version 12.

These exams are evergreen: soon after a Node.js version becomes the only active LTS line, the certifications are updated to stay in lockstep with that LTS version. Now that Node.js version 10 has moved into maintenance, certifications will be based on Node.js version 12.

While there are no changes to the current set of Domains and Competencies for the JSNAD and JSNSD Exams, candidates are advised to review the functionality of libraries or frameworks on Node.js version 12. For a full list of differences between Node.js version 10 and Node.js version 12, see https://nodejs.org/ca/blog/uncategorized/10-lts-to-12-lts/.

OpenJS Foundation welcomes two new board members from Google and Joyent

Categories: Announcement, Blog

The OpenJS Foundation family is growing! We are happy to welcome two new members to the Board of Directors: Sonal Bhoraniya from Google and Sean Johnson from Joyent.

Sonal Bhoraniya is an attorney on Google’s Open Source Compliance Team, ensuring compliance with open source policies and adherence to the collaborative, open source culture. Sonal believes in the value of maintaining and contributing to open source communities. In addition to her legal acumen, Sonal is a JavaScript/Node.js developer. She earned her Bachelor of Science from Duke and her JD from Vanderbilt. In her role on the Board, Sonal is excited to support and help ensure the healthy and sustainable growth of the OpenJS community.

Sean Johnson joins from Joyent, a subsidiary of Samsung, where he leads Joyent’s Commercial Group, covering a variety of open source projects, products, and services. Sean is an OSS-first product leader and advocate for vibrant and productive open source communities. In his role as a member of the Board, Sean hopes to accelerate social and community equity in the OpenJS ecosystem through enablement, collaboration, and shared values. Sean earned a Bachelor of Arts from Vanderbilt.

The OpenJS Foundation looks forward to the contributions of both Sonal and Sean and is honored to have them serve as Platinum Directors.

OpenJS Foundation Joins Open Source Initiative as Newest Affiliate Member

Categories: Announcement, Blog

Membership emphasizes growing outreach and engagement with broader software and technology communities.


PALO ALTO, Calif., June 11, 2020 — The Open Source Initiative® (OSI), the international authority in open source licensing, is excited to announce the affiliate membership of the OpenJS Foundation, the premier home for critical open source JavaScript projects, including Appium, Dojo, jQuery, Node.js, webpack, and 30 more. The OpenJS Foundation’s membership in the OSI highlights the incredible impact of JavaScript across all industries, web technologies, communities, and, ultimately, the open source software movement.

“The OpenJS Foundation is thrilled to join OSI as an Affiliate Member and we’re proud to have Myles Borins represent our JavaScript communities,” said Robin Ginn, OpenJS Foundation Executive Director. “In addition to all of our projects using OSI-approved licenses, our neutral organization shares common principles with the OSI, including technical governance and accountability. As an Affiliate Member, we can better advance open source development methodologies for individual open source projects and the ecosystem as a whole.”

Formed through a merger of the JS Foundation and Node.js Foundation in 2019, the OpenJS Foundation supports the healthy growth of JavaScript and web technologies by providing a neutral organization to host and sustain projects, as well as collaboratively fund activities that benefit the ecosystem as a whole. Originally developed in 1995, JavaScript is now arguably the most widely used programming language, with GitHub reporting it as the “Top Language” from 2014 through 2019 in its State of the Octoverse report. JavaScript’s growth and popularity can be attributed to its accessibility, as it is often a new developer’s first language, and to its applicability as a core component of today’s web-driven technology landscape. JavaScript also serves as a conduit to, and proof of concept for, open source software development, projects, and communities. For some, JavaScript provides their first experience in open source development and communities; for others, their experience in JavaScript projects and communities is helping to lead and further refine the larger open source movement itself.

The OpenJS Foundation serves as a valuable resource both for new JavaScript developers and emerging projects, offering a foundation for support and growth, and for veterans with broad experience and mature projects, providing a platform to extend best practices and guiding principles out to the broader open source software community.

Working with the OpenJS Foundation provides the Open Source Initiative a unique opportunity to engage with one of the open source software movement’s largest and most influential communities. JavaScript developers and projects are deeply committed to open source as a development model and to its ethos of co-creation through collaboration and contribution. That makes the OpenJS Foundation’s affiliate membership, and the community it represents, a critical partnership not only for open source’s continued growth and development but for the OSI as well.

“We are thrilled to welcome aboard OpenJS as an OSI Affiliate Member,” said Tracy Hinds, Chief Financial Officer of OSI. “It is a time in open source where it’s vital to learn from and be challenged by the growing concerns about sustainability. We look to OpenJS as a great partner in iterating over the questions to be asking in how projects are building, maintaining, and sustaining open source software.”

The OSI Affiliate Member Program, available at no cost, allows non-profit organizations to join and support the OSI’s work to promote and protect open source software. Affiliate members participate directly in the direction and development of the OSI through board elections and incubator projects that support software freedom. Membership provides a forum where open source leaders, businesses, and communities engage through member-driven initiatives to increase awareness and adoption of open source software.

About OpenJS Foundation 

The OpenJS Foundation (https://openjsf.org/) is committed to supporting the healthy growth of the JavaScript ecosystem and web technologies by providing a neutral organization to host and sustain projects, as well as collaboratively fund activities for the benefit of the community at large. The OpenJS Foundation is made up of 35 open source JavaScript projects, including Appium, Dojo, jQuery, Node.js, and webpack, and is supported by 30 corporate and end-user members, including GoDaddy, Google, IBM, Intel, Joyent, and Microsoft. These members recognize the interconnected nature of the JavaScript ecosystem and the importance of providing a central home for projects which represent significant shared value.

About the Open Source Initiative

For over 20 years, the Open Source Initiative (https://opensource.org/) has worked to raise awareness and adoption of open source software, and build bridges between open source communities of practice. As a global non-profit, the OSI champions software freedom in society through education, collaboration, and infrastructure, stewarding the Open Source Definition (OSD), and preventing abuse of the ideals and ethos inherent to the open source movement.

Project Update: Testing in parallel with Mocha v8.0.0

Categories: Announcement, Blog, Mocha

Use parallel mode to achieve significant speedups for large test suites

This blog was written by Christopher Hiller and was originally posted on the IBM Developer Blog. Mocha is a hosted project at the OpenJS Foundation.

With the release of Mocha v8.0.0, Mocha now supports running in parallel mode under Node.js. Running tests in parallel mode allows Mocha to take advantage of multi-core CPUs, resulting in significant speedups for large test suites.

Read about parallel testing in Mocha’s documentation.

Before v8.0.0, Mocha only ran tests in serial: one test must finish before moving on to the next. While this strategy is not without benefits — it’s deterministic and snappy on smaller test suites — it can become a bottleneck when running a large number of tests.

Let’s take a look at how to take advantage of parallel mode in Mocha by enabling it on a real-world project: Mocha itself!

Installation

The Node.js v8.x release line has reached End-of-Life; Mocha v8.0.0 requires Node.js v10, v12, or v14.

Mocha doesn’t need to install itself, but you might. You need Mocha v8.0.0 or newer, so:

npm i mocha@8 --save-dev

Moving right along…

Use the --parallel flag

In many cases, all you need to do to enable parallel mode is supply --parallel to the mocha executable. For example:

mocha --parallel test/*.spec.js

Alternatively, you can specify any command-line flag by using a Mocha configuration file. Mocha keeps its default configuration in a YAML file, .mocharc.yml. It looks something like this (trimmed for brevity):

# .mocharc.yml
require: 'test/setup'
ui: 'bdd'
timeout: 300

To enable parallel mode, I’m going to add parallel: true to this file:

# .mocharc.yml w/ parallel mode enabled
require: 'test/setup'
ui: 'bdd'
timeout: 300
parallel: true

Note: Examples below use --parallel and --no-parallel for the sake of clarity.

Let’s run npm test and see what happens!

Spoiler: It didn’t work the first time

Oops, I got a bunch of “timeout” exceptions in the unit tests, which use the default timeout value (300ms, as shown above). Look:

  2) Mocha
       "before each" hook for "should return the Mocha instance":
     Error: Timeout of 300ms exceeded. For async tests and hooks, ensure "done()" is called; if returning a Promise, ensure it resolves. (/Users/boneskull/projects/mochajs/mocha/test/node-unit/mocha.spec.js)
      at Hook.Runnable._timeoutError (lib/runnable.js:425:10)
      at done (lib/runnable.js:299:18)
      at callFn (lib/runnable.js:380:7)
      at Hook.Runnable.run (lib/runnable.js:345:5)
      at next (lib/runner.js:475:10)
      at Immediate._onImmediate (lib/runner.js:520:5)
      at processImmediate (internal/timers.js:456:21)

That’s weird. I run the tests a second time, and different tests throw “timeout” exceptions. Why?

Because of many variables — from Mocha to Node.js to the OS to the CPU itself — parallel mode exhibits a much wider range of timings for any given test. These timeout exceptions don’t indicate a newfound performance issue; rather, they’re a symptom of a naturally higher system load and nondeterministic execution order.

To resolve this, I’ll increase Mocha’s default test timeout from 300ms (0.3s) to 1000ms (1s):

# .mocharc.yml
# ...
timeout: 1000

Mocha’s “timeout” functionality is not to be used as a benchmark; its intent is to catch code that takes an unexpectedly long time to execute. Since we now expect tests to potentially take longer, we can safely increase the timeout value.
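
If only a handful of tests are known to be slow, an alternative to raising the global default is to override the timeout locally. Here is a minimal sketch (the suite name and value are made up for illustration):

// A slow test can opt into a larger timeout without changing the global default.
describe('integration-style suite', function () {
  it('waits on an external process', function () {
    this.timeout(5000); // hypothetical value; applies only to this test
    // ... test body ...
  });
});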

Now that the tests pass, I’m going to try to make them pass faster.

Optimizing parallel mode

By default, Mocha’s maximum job count is n – 1, where n is the number of CPU cores on the machine. This default value will not be optimal for all projects. The job count also does not imply that “Mocha gets to use n – 1 CPU cores,” because that’s up to the operating system. It is, however, a default, and it does what defaults do.

When I say “maximum job count,” I mean that Mocha could spawn this many worker processes if needed. It depends on the count and execution time of the test files.
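
Roughly speaking, the default relates to the machine’s CPU count like this (a sketch only; Mocha’s actual logic lives in its source):

// sketch: derive a "CPU cores minus one" job count, never going below one
const os = require('os');
const defaultJobs = Math.max(os.cpus().length - 1, 1);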

To compare performance, I use the friendly benchmarking tool hyperfine to get an idea of how various configurations will perform.

Regarding hyperfine usage: In the examples below, I’m passing two options to hyperfine. The first is -r 5 for “runs,” which runs the command five (5) times; the default is ten (10), but this is slow, and I’m impatient. The second is --warmup 1, which performs a single “warmup” run whose result is discarded. Warmup runs reduce the chance that the first few runs will be significantly slower than the subsequent runs, which may skew the final result. If this happens, hyperfine will even warn you about it, which is why I’m using it!
If you try this yourself, you need to replace bin/mocha with node_modules/.bin/mocha or mocha, depending on your environment; bin/mocha is the path to the mocha executable relative to the working copy root.

Mocha’s integration tests (about 260 tests over 55 files) typically make assertions about the output of the mocha executable itself. They also need a longer timeout value than the unit tests; below, we use a timeout of ten (10) seconds.

I run the integration tests in serial. Nobody ever claimed they ran at ludicrous speed:

$ hyperfine -r 5 --warmup 1 "bin/mocha --no-parallel --timeout 10s test/integration/**/*.spec.js"
Benchmark #1: bin/mocha --no-parallel --timeout 10s test/integration/**/*.spec.js
  Time (mean ± σ):     141.873 s ±  0.315 s    [User: 72.444 s, System: 14.836 s]
  Range (min … max):   141.447 s … 142.296 s    5 runs

That’s over two (2) minutes. Let’s try it again in parallel mode. In my case, I have an eight-core CPU (n = 8), so by default, Mocha uses seven (7) worker processes:

$ hyperfine -r 5 --warmup 1 "bin/mocha --parallel --timeout 10s test/integration/**/*.spec.js"
Benchmark #1: bin/mocha --parallel --timeout 10s test/integration/**/*.spec.js
  Time (mean ± σ):     65.235 s ±  0.191 s    [User: 78.302 s, System: 16.523 s]
  Range (min … max):   65.002 s … 65.450 s    5 runs

Using parallel mode shaves about 76 seconds off the run, down to just over a minute! That’s roughly a 54% reduction in total run time. But, can we do better?

I can use the --jobs/-j option to specify exactly how many worker processes Mocha will potentially use. Let’s see what happens if I reduce this number to four (4):

$ hyperfine -r 5 --warmup 1 "bin/mocha --parallel --jobs 4 --timeout 10s test/integration/**/*.spec.js"
Benchmark #1: bin/mocha --parallel --jobs 4 --timeout 10s test/integration/**/*.spec.js
  Time (mean ± σ):     69.764 s ±  0.512 s    [User: 79.176 s, System: 16.774 s]
  Range (min … max):   69.290 s … 70.597 s    5 runs

Unfortunately, that’s slower. What if I increased the number of jobs, instead?

$ hyperfine -r 5 --warmup 1 "bin/mocha --parallel --jobs 12 --timeout 10s test/integration/**/*.spec.js"
Benchmark #1: bin/mocha --parallel --jobs 12 --timeout 10s test/integration/**/*.spec.js
  Time (mean ± σ):     64.175 s ±  0.248 s    [User: 80.611 s, System: 17.109 s]
  Range (min … max):   63.809 s … 64.400 s    5 runs

Twelve (12) is ever-so-slightly faster than the default of seven (7). Remember, my CPU has eight (8) cores. Why does spawning more processes increase performance?

I speculate it’s because these tests aren’t CPU-bound. They mostly perform asynchronous I/O, so the CPU has some spare cycles waiting for tasks to complete. I could spend more time trying to squeeze another 500ms out of these tests, but for my purposes, it’s not worth the bother. Perfect is the enemy of good, right? The point is to illustrate how you can apply this strategy to your own projects and arrive at a configuration you’re happy with.

When to avoid parallel mode

Would you be shocked if I told you that running tests in parallel is not always appropriate? No, you would not be shocked.

It’s important to understand two things:

  1. Mocha does not run individual tests in parallel. Mocha runs test files in parallel.
  2. Spawning worker processes is not free.

That means if you hand Mocha a single, lonely test file, it will spawn a single worker process, and that worker process will run the file. If you only have one test file, you’ll be penalized for using parallel mode. Don’t do that.

Other than the “lonely file” non-use-case, the unique characteristics of your tests and sources will impact the result. There’s an inflection point below which running tests in parallel will be slower than running in serial.

In fact, Mocha’s own unit tests (about 740 tests across 35 files) are a great example. Like good unit tests, they try to run quickly, in isolation, without I/O. I’ll run Mocha’s unit tests in serial, for the baseline:

$ hyperfine -r 5 --warmup 1 "bin/mocha --no-parallel test/*unit/**/*.spec.js"
Benchmark #1: bin/mocha --no-parallel test/*unit/**/*.spec.js
  Time (mean ± σ):      1.262 s ±  0.026 s    [User: 1.286 s, System: 0.145 s]
  Range (min … max):    1.239 s …  1.297 s    5 runs

Now I’ll try running them in parallel. Despite my hopes, this is the result:

$ hyperfine -r 5 --warmup 1 "bin/mocha --parallel test/*unit/**/*.spec.js"
Benchmark #1: bin/mocha --parallel test/*unit/**/*.spec.js
  Time (mean ± σ):      1.718 s ±  0.023 s    [User: 3.443 s, System: 0.619 s]
  Range (min … max):    1.686 s …  1.747 s    5 runs

Objectively, running Mocha’s unit tests in parallel slows them down by about half a second. This is the overhead of spawning worker processes (and the requisite serialization for inter-process communication).

I’ll go out on a limb and predict that many projects having very fast unit tests will see no benefit from running those tests in Mocha’s parallel mode.

Remember my .mocharc.yml? I yanked that parallel: true out of there; instead, Mocha will use it only when running its integration tests.

In addition to being generally unsuitable for these types of tests, parallel mode has some other limitations; I’ll discuss these next.

Caveats, disclaimers and gotchas, Oh my

Due to technical limitations (i.e., “reasons”), a handful of features aren’t compatible with parallel mode. If you try, Mocha will throw an exception.

Check out the docs for more information and workarounds (if any).

Unsupported reporters

If you’re using the markdown, progress, or json-stream reporters, you’re out of luck for now. These reporters need to know how many tests we intend to execute up front, and parallel mode does not have that information.

This could change in the future, but would introduce breaking changes to these reporters.

Exclusive tests

Exclusive tests (.only()) do not work. If you try, Mocha runs tests (as if .only() was not used) until it encounters usage of .only(), at which point it aborts and fails.

Given that exclusive tests are typically used in a single file, parallel mode is also unsuitable for this situation.
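
For reference, an exclusive test uses standard Mocha syntax like the following; in parallel mode, Mocha aborts and fails as soon as it reaches it:

// .only() restricts the run to this test in serial mode;
// in parallel mode it causes the run to abort and fail
describe('parser', function () {
  it.only('handles empty input', function () {
    // ...
  });
});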

Unsupported options

Incompatible options include --sort, --delay, and, importantly, --file. In short, it’s because we cannot run tests in any specific order.

Of these, --file likely impacts the greatest number of projects. Before Mocha v8.0.0, --file was the recommended way to define “root hooks.” Root hooks are hooks (such as beforeEach(), after(), setup(), etc.) which all other test files will inherit. The idea is that you would define root hooks in, for example, hooks.js, and run Mocha like so:

mocha --file hooks.js "test/**/*.spec.js"

All --file parameters are considered test files and will be run in order and before any other test files (in this case, test/**/*.spec.js). Because of these guarantees, Mocha “bootstraps” with the hooks defined in hooks.js, and this affects all subsequent test files.

This still works in Mocha v8.0.0, but only in serial mode. But wait! Its use is now strongly discouraged (and will eventually be fully deprecated). In its place, Mocha has introduced Root Hook Plugins.

Root Hook Plugins

Root Hook Plugins are modules (CJS or ESM) which have a named export, mochaHooks, in which the user can freely define hooks. Root Hook Plugin modules are loaded via Mocha’s --require option.

Read the docs on using Root Hook Plugins

The documentation (linked above) contains a thorough explanation and more examples, but here’s a straightforward one.

Say you have a project with root hooks loaded via --file hooks.js:

// hooks.js
beforeEach(function() {
  // do something before every test
  this.timeout(5000); // trivial example
});

To convert this to a Root Hook Plugin, change hooks.js to be:

// hooks.js
exports.mochaHooks = {
  beforeEach() {
    this.timeout(5000);
  }
};

Tip: This can be an ESM module, for example, hooks.mjs; use a named export of mochaHooks.
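
For example, the same hooks as an ES module might look like this (a sketch assuming a hooks.mjs file):

// hooks.mjs
export const mochaHooks = {
  beforeEach() {
    this.timeout(5000);
  }
};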

When calling the mocha executable, replace --file hooks.js with --require hooks.js. Nifty!

Troubleshooting parallel mode

While parallel mode should just work for many projects, if you’re still having trouble, refer to this checklist to prepare your tests:

  • ✅ Ensure you are using a supported reporter.
  • ✅ Ensure you are not using other unsupported flags.
  • ✅ Double-check your config file; options set in config files will be merged with any command-line option.
  • ✅ Look for root hooks in your tests (they look like the hooks.js example above). Move them into a Root Hook Plugin.
  • ✅ Do any assertion, mock, or other test libraries you’re consuming use root hooks? They may need to be migrated for compatibility with parallel mode.
  • ✅ If tests are unexpectedly timing out, you may need to increase the default test timeout (via --timeout).
  • ✅ Ensure your tests do not depend on being run in a specific order.
  • ✅ Ensure your tests clean up after themselves; remove temp files, handles, sockets, etc. Don’t try to share state or resources between test files.

What’s next

Parallel mode is new and not perfect; there’s room for improvement. But to do so, Mocha needs your help. Send the Mocha team your feedback! Please give Mocha v8.0.0 a try, enable parallel mode, use Root Hook Plugins, and share your thoughts.

Dojo turns 16, New Dojo 7 Delivers Suite of Reactive Material Widgets

Categories: Announcement, Blog, Dojo

Dojo, an OpenJS Foundation Impact Project, just hit a new milestone. Dojo 7 is a progressive framework for modern web applications built with TypeScript, which makes Dojo an essential tool for building modern websites. The Dojo framework scales easily and allows building anything from simple static websites all the way up to enterprise-scale single-page reactive web applications.

Dojo 7 Widgets takes a step forward in out-of-the-box usability, adding more than 20 new widgets and a Material theme that developers can use to build feature-rich applications even faster. The new widgets are consistent, usable, and accessible, covering important website building blocks like cards, passwords, forms, and more.

See the Dojo Widgets documentation and examples for more information. 

Dojo’s no flavor-of-the-month JavaScript framework. The Dojo Toolkit was started in 2004 as part of a non-profit organization established to promote its adoption. In 2016, that foundation merged with the jQuery Foundation to become the JS Foundation. Then, in March 2019, the JS Foundation merged with the Node.js Foundation to become the OpenJS Foundation. Dojo, therefore, gives the OpenJS Foundation organizational roots that predate the iPhone.

In 2018, modern Dojo arrived with Dojo 2, a complete rewrite and rethink of Dojo into its current form: a lean, modern, TypeScript-first, batteries-included progressive framework. Aligning with modern standards and best practices, the resulting distribution build of Dojo can include zero JavaScript code for statically built websites, or as little as 13KB of compressed JavaScript for full-featured web apps.

Dojo has been used widely over the years by companies such as Cisco, JP Morgan, Esri, Intuit, ADP, Fannie Mae, Daimler, and many more.  Applications created with the Dojo Toolkit more than 10 years ago still work today with only minor adjustments and upgrades.

Modern Dojo is open source software available under the modified BSD license. Developers can try modern Dojo from Code Sandbox, or install Dojo via npm:

npm i @dojo/cli @dojo/cli-create-app -g

Create your first app

dojo create app --name hello-world

Get started with widgets

npm install @dojo/widgets 

Visit dojo.io for documentation, tutorials, cookbooks, and other materials. Read Dojo’s blog on this new release here.

How The Weather Company uses Node.js in production

Categories: Announcement, Blog, ESLint, member blog, Node.js

Using Node.js improved site speed, performance, and scalability

This piece was written by Noel Madali and originally appeared on the IBM Developer Blog. IBM is a member of the OpenJS Foundation.

The Weather Company uses Node.js to power their weather.com website, a multinational weather information and news website available in 230+ locales and localized in about 60 languages. As an industry leader in audience reach and accuracy, weather.com delivers weather data, forecasts, observations, historical data, news articles, and video.

Because weather.com offers a location-based service that is used throughout the world, its infrastructure must support consistent uptime, speed, and precise data delivery. Scaling the solution to billions of unique locations has created multiple technical challenges and opportunities for the technical team. In this blog post, we cover some of the unique challenges we had to overcome when building weather.com and discuss how we ended up using Node.js to power our internationalized weather application.

Drupal ‘n Angular (DNA): The early days

In 2015, we were a Drupal ‘n Angular (DNA) shop. We unofficially pioneered the industry by marrying Drupal and Angular together to build a modular, content-based website. We used Drupal as our CMS to control content and page configuration, and we used Angular to code front-end modules.

Front-end modules were small blocks of user interface that had data and some interactive elements. Content editors would move the modules around to visually create a page and use Drupal to create articles about weather and publish them on the website.

DNA was successful in rapidly expanding the website’s content and giving editors the flexibility to create page content on the fly.

As our usage of DNA grew, we faced many technical issues which ultimately boiled down to three main themes:

  • Poor performance
  • Instability
  • Slower time for developers to fix, enhance, and deploy code (also known as velocity)

Poor performance

Our site suffered from poor performance, with sluggish load times and unreliable availability. This, in turn, directly impacted our ad revenue since a faster page translated into faster ad viewability and more revenue generation.

To address some of our performance concerns, we conducted different front-end experiments.

  • We analyzed and evaluated modules to determine what we could change. For example, we evaluated getting rid of some modules that were not used all the time, and we rewrote modules so they wouldn’t use giant JS libraries.
  • We evaluated our usage of a tag manager in reference to ad serving performance.
  • We added lazy-loaded modules to remove the module on first load in order to reduce the amount of JavaScript served to the client.

Instability

Because of the fragile deployment process of using Drupal with Angular, our site suffered from too much downtime. The deployment process was a matter of taking the name of a git branch and entering it into a UI to get released into different environments. There was no real build process, only version control.

Ultimately, this led to many bad practices that impacted developers, including a lack of version control methodology, non-reproducible builds, and the like.

Slower developer velocity

The majority of our developers had front-end experience, but very few of them were knowledgeable about the inner workings of Drupal and PHP. As such, features and bug fixes related to PHP were not addressed as quickly due to knowledge gaps.

Large deployments contributed to slower velocity as well as stability issues, where small changes could break the entire site. Since a deployment was the entire codebase (Drupal, Drupal plugins/modules, front-end code, PHP scripts, etc), small code changes in a release could easily get overlooked and not be properly tested, breaking the deployment.

Overall, while we had a few quick wins with DNA, the constant regressions due to the setup forced us to consider alternative paths for our architecture.

Rethinking our architecture to include Node.js

Our first foray into using Node.js was a one-off project for creating a lite experience for weather.com that was completely server-side rendered and had minimal JavaScript. The audience had limited bandwidth and minimal device capabilities (for example, low-end smartphones using Facebook’s Free Basics).

Stakeholders were happy with the lite experience, commenting on the nearly instantaneous page loads. Analyzing this proof-of-concept was important in determining our next steps in our architectural overhaul.

Differing from DNA, the lite experience:

  • Rendered pages as server side only
  • Kept the front-end footprint under 30KB (virtually no JavaScript, little CSS, few images).

We used what we learned from the lite experience to help us serve our website more performantly. This started with rethinking our DNA architecture.

Metrics to measure success

Before we worked on a new architecture, we had to show our business that a re-architecture was needed. The first thing we had to determine was what to measure to show success.

We consulted with the Google Ad team to understand how exactly a high-performing webpage impacts business results. Google showed us proof that improving page speed increases ad viewability which translates to revenue.

With that in hand, each day we conducted tests across a set of pages to measure:

  • Speed index
  • Time to first interaction
  • Bytes transferred
  • Time to first ad call

We used a variety of tools to collect our metrics: WebPageTest, Lighthouse, and sitespeed.io.

As we compiled a list of these metrics, we were able to judge whether certain experiments were beneficial or not. We used our analysis to determine what needed to change in our architecture to make the site more successful.

While we intended to completely rewrite our DNA website, we acknowledged that we needed to stair-step our approach to experimenting with a newer architecture. Using the above methodology, we created a beta page and A/B tested it to verify its success.

From Shark Tank to a beta of our architecture

Recognizing the performance of our original Node.js proof of concept, we held a “Shark Tank” session where we presented and defended different ideal architectures. We evaluated whole frameworks or combinations of libraries like Angular, React, Redux, Ember, lodash, and more.

From this experiment, we collectively agreed to move from our monolithic architecture to a Node.js backend and newer React frontend. Our timeline for this migration was between nine months to a year.

Ultimately, we decided to use a pattern of small JS libraries and tools, similar to that of a UNIX operating system’s tool chain of commands. This pattern gives us the flexibility to swap out one component from the whole application instead of having to refactor large amounts of code to include a new feature.

On the backend, we needed to decouple page creation and page serving. We kept Drupal as a CMS and created a way for documents to be published out to more scalable systems which can be read by other services. We followed the pattern of Backends for Frontends (BFF), which allowed us to decouple our page frontends and allow for more autonomy of our backend downstream systems. We use the documents published by the CMS to deliver pages with content (instead of the traditional method of the CMS monolith serving the pages).

Even though Drupal and PHP can render server-side, our developers were more familiar with JavaScript, so using Node.js to implement isomorphic (universal) rendering of the site increased our development velocity.


Developing with Node.js was an easy focus shift for our previous front-end oriented developers. Since the majority of our developers had a primarily JavaScript background, we stayed away from solutions that revolved around separate server-side languages for rendering.

Over time, we implemented and evolved our usage from our first project. After developing our first few pages, we decided to move away from ExpressJS to Koa to use newer JS standards like async/await. We started with pure React but switched to React-like Inferno.js.
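
To illustrate the appeal of that style, here is a hypothetical sketch (not weather.com’s actual code) of a Koa handler awaiting data and rendering a page in straight-line async/await fashion:

// Hypothetical Koa page server; fetchPageDocument and renderPage are placeholder helpers.
const Koa = require('koa');
const app = new Koa();

app.use(async (ctx) => {
  const pageDocument = await fetchPageDocument(ctx.path); // placeholder: load the published CMS document
  ctx.type = 'html';
  ctx.body = renderPage(pageDocument); // placeholder: server-side render the page
});

app.listen(3000);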

After evaluating many different build systems (gulp, grunt, browserify, systemjs, etc), we decided to use Webpack to facilitate our build process. We saw Webpack’s growing maturity in a fast-paced ecosystem, as well as the pitfalls of its competitors (or lack thereof).

Webpack solved our core issue of DNA’s JS aggregation and minification. With a centralized build process, we could build JS code using a standardized module system, take advantage of the npm ecosystem, and minify the bundles (all during the build process and not during runtime).

Moving from client-side to server-side rendering of the application increased our speed index and got information to the user faster. React helped us in this aspect of universal rendering: being able to share code on both the frontend and backend was crucial for server-side rendering and code reuse.
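
In outline, universal rendering means the same component produces HTML on the server and is then hydrated on the client. The sketch below uses plain React APIs and hypothetical file names, not The Weather Company’s code:

// server.js: render the shared component to an HTML string
const React = require('react');
const { renderToString } = require('react-dom/server');
const App = require('./App'); // hypothetical shared component

function renderAppHtml(props) {
  return renderToString(React.createElement(App, props));
}

// client.js: attach event handlers to the server-rendered markup instead of re-rendering it
// const ReactDOM = require('react-dom');
// ReactDOM.hydrate(React.createElement(App, window.__PROPS__), document.getElementById('root'));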

Our first launch of our beta page was a Single Page App (SPA). Traditionally, we had to render each page and location as a hit back to the origin server. With the SPA, we were able to reduce our hits back to the origin server and improve the speed of rendering the next view thanks to universal rendering.

The following image shows how much faster the webpage response was after the SPA was introduced.


As our solution included more Node.js, we were able to take advantage of a lot of the tooling associated with a Node.js ecosystem, including ESLint for linting, Jest for testing, and eventually Yarn for package management.

Linting and testing, as well as a more refined CI/CD pipeline, helped reduce bugs in production. This led to a more mature and stable platform as a whole, higher engineering velocity, and increased developer happiness.

Changing deployment strategies

Recognizing our problems with our DNA deployments, we knew we needed a better solution for delivering code to infrastructure. With our DNA setup, we used a managed system to deploy Drupal. For our new solution, we decided to take advantage of newer, container-based deployment and infrastructure methodologies.

By moving to Docker and Kubernetes, we achieved many best practices:

  • Separating out disparate pages into different services reduces failures
  • Building stateless services allows for less complexity, ease of testing, and scalability
  • Builds are repeatable (Docker images ensure the right artifacts are deployed and consistent)

Our Kubernetes deployment allowed us to be truly distributed across four regions and seven clusters, with dozens of services scaled from 3 to 100+ replicas running on 400+ worker nodes, all on IBM Cloud.

Addressing a familiar set of performance issues

After running a successful beta experiment, we continued down the path of migrating pages into our new architecture. Over time, some familiar issues cropped up:

  • Pages became heavier
  • Build times were slower
  • Developer velocity decreased

We had to evolve our architecture to address these issues.

Beta v2: Creating a more performant page

Our second evolution of the architecture was a renaissance (rebirth). We had to go back to the basics, revisit our lite experience, and see why it was successful. We analyzed our performance issues and concluded that the SPA was becoming a performance bottleneck. Although a SPA benefits second page visits, we came to understand that the majority of our users visit the website and leave once they get their information.

We designed and built the solution without a SPA, but kept React hydration in order to keep code reuse across the server and client-side. We paid more attention to the tooling during development by ensuring that code coverage (the percentage of JS client code used vs delivered) was more efficient.

Removing the SPA overall was key to reducing build times as well. Since a page was no longer stitched together from a singular entry point, we split the Webpack builds so that individual pages can have their own set of JS and assets.
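
A per-page entry setup in Webpack might look like the following sketch; the entry names and paths are hypothetical, not the real weather.com configuration:

// webpack.config.js: each page gets its own entry point and therefore its own bundle
const path = require('path');

module.exports = {
  mode: 'production',
  entry: {
    home: './src/pages/home/index.js',          // hypothetical entry
    forecast: './src/pages/forecast/index.js',  // hypothetical entry
  },
  output: {
    path: path.resolve(__dirname, 'dist'),
    filename: '[name].[contenthash].js',
  },
};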

We were able to reduce our page weight even more compared to the Beta site. Reducing page weight had an overall impact on page load times. The graph below shows how speed index decreased.

Note: Some data was lost between January and October of 2019.

This architecture is now our foundation for any and all pages on weather.com.

Conclusion

weather.com was not transformed overnight and it took a lot of work to get where we are today. Adding Node.js to our ecosystem required some amount of trial and error.

We achieved success by understanding our issues, collecting metrics, and implementing and then reimplementing solutions. Our journey was an evolution. Not only was it a change to our back end, but we had to be smarter on the front end to achieve the best performance. Changing our deployment strategy and infrastructure allowed us to achieve multiple best practices, reduce downtimes, and improve overall system stability. JavaScript being used in both the back end and front end improved developer velocity.

As we continue to architect, evolve, and expand our solution, we are always looking for ways to improve. Check out weather.com on your desktop, or for our newer/more performant version, check out our mobile web version on your mobile device.

Introducing OpenJS Foundation Open Office Hours

Categories: Announcement, Blog, Office Hours

This piece was written by Joe Sepi, OpenJS Foundation Cross Project Council Chair

Kai Cataldo from ESLint during a recent Office Hours.

Earlier this year, to help our community better understand ways to participate, as well as to provide hosted projects with ways to showcase what they are working on, I started hosting bi-weekly Open Office Hours.

The goal of Office Hours is to give members of our community a place to ask questions, get guidance on onboarding, and learn more about other projects in the Foundation. It has also served as a place for current projects to get connected to the wider OpenJS Foundation community and share key learnings. 

So far, we’ve had Wes Todd from the Express project, Alexandr Tovmach from Node.js i18n, Saulo Nunes talking through ways to contribute to Node.js, and Kai Cataldo from ESLint. You can find all the previously recorded sessions and the upcoming schedule at github.com/openjs-foundation/open-office-hours

Everyone is invited to attend.

How Can I Join?

These meetings take place every other Thursday at 10 am PT, 12 pm CT, 1 pm ET and are scheduled on the OpenJS Public Calendar. Here’s the Zoom link to join these sessions.

While everyone is encouraged to join the call and the initiative, if you are unable to attend a session but would like to get more involved or have more questions, please open an issue in the repo.

Let’s do more in open source together!

Project News: Electron releases a new version

Categories: Announcement, Blog, Electron, Project Update

Congrats to the Electron team on their latest version release, Electron 9.0!

It includes upgrades to Chromium 83, V8 8.3, and Node.js 12.14. They’ve added several new API integrations for their spellchecker feature, enabled the PDF viewer, and much more!

Read about all the details on the Electron blog here.

Learn more about Electron and why it has joined the Foundation as an incubation project.

New Node.js Training Course Supports Developers in their Certification, Technical and Career Goals

Categories: Announcement, Blog, Certification, Node.js

Last October, the OpenJS Foundation, in partnership with The Linux Foundation, released two Node.js certification exams to better support Node.js developers by giving them a way to showcase their Node.js skills. Today, we are thrilled to unveil the next phase of the OpenJS certification and training program with a new training course, LFW211 – Node.js Application Development.

LFW211 is a vendor-neutral training geared toward developers who wish to master and demonstrate creating Node.js applications. The course trains developers on a broad range of Node.js capabilities at depth, equipping them with rigorous foundational skills and knowledge that will translate to building any kind of Node.js application or library.

By the end of the course, participants:

  • Understand foundational essentials for Node.js and JavaScript development
  • Become skillful with Node.js debugging practices and tools
  • Efficiently interact at a high level with I/O, binary data and system metadata
  • Attain proficiency in creating and consuming ecosystem/innersource libraries

Node.js Application Development will also help prepare those planning to take the OpenJS Node.js Application Developer certification exam. A bundled offering including access to both the training course and certification exam is also available.

Thank you to David Clements, who developed this key training. David is a Principal Architect, public speaker, author of the Node Cookbook, and open source creator specializing in Node.js and browser JavaScript. He is also one of the technical leads and authors of the official OpenJS Node.js Application Developer Certification and OpenJS Node.js Services Developer Certification.

Node.js is one of the most popular JavaScript runtimes in the world. It powers hundreds of thousands of websites, including some of the most popular from companies like Google, IBM, Microsoft, and Netflix. Individual developers and enterprises use Node.js to power many of their most important web applications, making it essential to maintain a stable pool of qualified talent.

Ready to take the training? The course is available now. The $299 course fee – or $499 for a bundled offering of both the course and the related certification exam – provides unlimited access to all course content and labs for one year. This course and exam, in addition to all Linux Foundation training courses and certification exams, are discounted 30% through May 31 by using code ANYWHERE30. Interested individuals may enroll here.

Node-RED Creators AMA Recap

Categories: AMA, Blog, Node-RED

Node-RED AMA participants answer community questions live.

The creators of Node-RED recently gave an informative Ask Me Anything (AMA) which you can watch below. Node-RED is a Growth Project at the OpenJS Foundation. Speakers include Nick O’Leary (@knolleary), Dave Conway-Jones (@ceejay), and John Walicki (@johnwalicki).

This AMA can help individuals interested in Node-RED get a better understanding of the flow-based programming tool. Using a combination of user-generated and preexisting questions, the discussion focuses heavily on the processes employed by the creators of Node-RED to optimize the tool.

The creators of Node-RED answered questions from the live chat, giving insight into how Node-RED is iterated and improved. Questions ranged from where Node-RED has gone in the last 7 years to whether or not Node-RED is a prototyping tool. 

Full Video Here

Video by Section 

  1. Introductions (0:00)
  2. What’s been going on in the last 7 years? (2:21)
  3. Did you have use cases in mind? (4:46)
  4. What’s it been like to work with open source? (7:10)
  5. Why is Node-RED so popular in the IoT space? (9:30)
  6. Where else is Node-RED popular? (12:25)
  7. How do you answer the question “is it a prototyping tool?” (15:00)
  8. Where does Node-RED fit in the low-code programming world? (17:20)
  9. 2020 recap, what’s next? (20:00)
  10. New features in 1.1? (23:10)
  11. Flow change for nodes? (26:00)
  12. Thoughts about encryption? (28:40)
  13. Do you see Node-RED scaling? (31:50)
  14. Best practices for sharing readable flows (34:15)
  15. Do you have large applications and flows being created now? (37:15)
  16. What would you say to a developer who should use Node-RED? (40:00)
  17. How can developers help? (41:25)
  18. Open is a mindset; how do you wade through forums and open source? (43:30)
  19. YouTube question: on-the-edge constrained environment modeling? (45:40)
  20. Node-RED + AI? (48:50)
  21. POV on containerization (50:30)
  22. Refreshing the Node-RED dashboard; thoughts on replacing the framework? (53:40)
  23. Node-RED conference? (57:45)
  24. Last thoughts? (59:25)

Our next AMA is with the Node.js Security Working Group on June 3 at 9 am PT. Submit your questions here.