The OpenJS In Action series features companies that use OpenJS Foundation projects to develop efficient, effective web technologies.
Esri uses OpenJS Foundation projects such as Dojo Toolkit, Grunt, ESLint and Intern to increase developer productivity and deliver high-quality applications that help the world fight back against the pandemic.
Esri’s contributions to the COVID response effort, along with an explanation of how they created the underlying technologies, are covered in a video interview.
Robin Ginn, Executive Director of the OpenJS Foundation, spoke with Kristian Ekenes, Product Engineer at Esri, to highlight the work his company has been doing. Esri normally creates mapping software, databases and tools to help businesses manage spatial data. However, Ekenes started work on a tool called Capacity Analysis when the COVID-19 pandemic began to spread.
Capacity Analysis is a configurable app that allows organizations to display and interact with results from two scenarios predicting a hospital’s ability to meet the demand of COVID-19 patients given configurable parameters, such as the percentage of people following social distancing guidelines. Health experts can create two hypothetical scenarios using one of two models: Penn Medicine’s COVID-19 Hospital Impact Model for Epidemics (CHIME) or the CDC’s COVID-19Surge model. Then they can deploy their own version of Capacity Analysis to view how demand for hospital beds, ICU beds, and ventilators varies by time and geography in each scenario. This tool is used by governments worldwide to better predict how the pandemic will challenge specific areas.
During the interview, Ekenes spoke on the challenges that come with taking on ambitious projects like Capacity Analysis. Esri has both a large developer team and a diverse ecosystem of applications. This makes it difficult to maintain consistency in the API and SDKs deployed across desktop and mobile platforms. To overcome these challenges, Esri utilizes several OpenJS Foundation projects including Dojo Toolkit, Grunt, ESLint and Intern.
Ekenes explained that Grunt and ESLint increase developer productivity by providing real-time feedback when writing code. The linter also standardizes work across developers by indicating when incorrect practices are being used. This reduces the number of pull requests between collaborators and saves time for the entire team. Intern allows developers to write testing modules and create high-quality apps by catching bugs early. In short, Esri helps ensure consistent and thoroughly tested applications by incorporating OpenJS Foundation projects into their work.
The OpenJS In Action series features companies that use OpenJS Foundation projects to help develop efficient, effective web technologies.
Expedia is an example of how adoption of new technologies and techniques can improve customer and developer experiences.
Robin Ginn, executive director of the OpenJS Foundation, interviewed Tiffany Le-Nguyen, Software Development Engineer at Expedia Group. Le-Nguyen explained how accessibility and performance concerns led developers to modernize Expedia’s infrastructure. One of the choices they made was to integrate ESLint into their testing pipeline to catch bugs and format input before content was pushed live. ESLint also proved to be a huge time-saver — it enforced development standards and warned developers when incorrect practices were being used.
Expedia is used globally to book properties and dates for trips, with users reserving properties in different currencies across different time zones. This makes it difficult to track when a property was reserved and whether the correct amount was paid. Luckily, Expedia was able to utilize Globalize, an OpenJS project that provides number formatting and parsing, date and time formatting, and currency formatting for languages across the world. Le-Nguyen was able to simplify currency tracking across continents by integrating Globalize into the project.
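As a rough sketch of the kind of locale-aware formatting Globalize provides, the example below uses the globalize and cldr-data npm packages; the amounts and locales are illustrative and not taken from Expedia's codebase.

```js
// Minimal Globalize sketch: format one price for US and German locales.
// Assumes the "globalize" and "cldr-data" packages are installed.
const Globalize = require("globalize");
const cldrData = require("cldr-data");

// Globalize is data-driven: CLDR locale data must be loaded first.
Globalize.load(cldrData.entireSupplemental());
Globalize.load(cldrData.entireMainFor("en", "de"));

const usd = Globalize("en").currencyFormatter("USD");
const eur = Globalize("de").currencyFormatter("EUR");
const enDate = Globalize("en").dateFormatter({ datetime: "medium" });

console.log(usd(1299.5));        // "$1,299.50"
console.log(eur(1299.5));        // "1.299,50 €"
console.log(enDate(new Date())); // e.g. "Jun 1, 2020, 3:30:00 PM"
```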
To end the talk, Le-Nguyen suggested that web developers should take another look into UI testing. Modern testing tools have simplified the previously clunky process. Proper implementation of a good testing pipeline improves the developer experience and leads to a better end product for the user.
In the middle of a project it can be difficult to identify redundant or problematic sections of code. Frustrations compound when multiple developers, all using different styles, need to collaborate and write using a unified format. However, the aforementioned problems can be avoided with gentle reminders and automated feedback.
This is where linters come in. Linters are essentially spell checkers for code. They highlight syntax errors, refactoring opportunities and style/formatting issues as code is written. This helps developers to notice and fix small errors before they become a major problem. ESLint is an especially powerful tool that identifies problematic sections of code based on the user’s plugins, rules and configurations. Due to its flexibility, it has become an incredibly popular addition to many projects. Kai Cataldo, a maintainer of ESLint, and Brandon Mills, a member of the Technical Steering Committee, answered questions and explained how to get started with the OpenJS project in a recent AMA.
In the talk, Cataldo clarified that ESLint is designed to help developers write the code they want. It should not tell them whether their code is “right” or “wrong” unless the error is code-breaking. The default settings of ESLint help developers recognize syntax errors early on, but the tool can be made more powerful with the addition of user-defined rules or downloadable configurations. Additionally, teams can standardize their code by defining a shared set of rules for the linter, saving time by producing consistent code that any member can easily understand.
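As a concrete illustration of such a shared rule set, here is a minimal, hypothetical .eslintrc.js (an assumption for illustration, not a configuration discussed in the AMA):

```js
// .eslintrc.js: a small team-wide configuration.
// Rules set to "error" fail the lint run; "warn" only reports.
module.exports = {
  extends: "eslint:recommended",  // start from the recommended rule set
  env: { browser: true, es2020: true },
  rules: {
    semi: ["error", "always"],    // style: require semicolons
    "no-unused-vars": "warn",     // likely bug: flag dead variables
    eqeqeq: "error"               // correctness: require === over ==
  }
};
```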
Cataldo and Mills also revealed future plans for ESLint — updated documentation, simplified configuration and parallel linting. They also discussed common problems of linters and how developers can contribute to the project to make ESLint even more powerful.
The Weather Company uses Node.js to power their weather.com website, a multinational weather information and news website available in 230+ locales and localized in about 60 languages. As an industry leader in audience reach and accuracy, weather.com delivers weather data, forecasts, observations, historical data, news articles, and video.
Because weather.com offers a location-based service that is used throughout the world, its infrastructure must support consistent uptime, speed, and precise data delivery. Scaling the solution to billions of unique locations has created multiple technical challenges and opportunities for the technical team. In this blog post, we cover some of the unique challenges we had to overcome when building weather.com and discuss how we ended up using Node.js to power our internationalized weather application.
Drupal ‘n Angular (DNA): The early days
In 2015, we were a Drupal ‘n Angular (DNA) shop. We unofficially pioneered the industry by marrying Drupal and Angular together to build a modular, content-based website. We used Drupal as our CMS to control content and page configuration, and we used Angular to code front-end modules.
Front-end modules were small blocks of user interfaces that had data and some interactive elements. Content editors would move around the modules to visually create a page and use Drupal to create articles about weather and publish it on the website.
DNA was successful in rapidly expanding the website’s content and giving editors the flexibility to create page content on the fly.
As our usage of DNA grew, we faced many technical issues which ultimately boiled down to three main themes:
Poor site performance
A fragile deployment process
Slower time for developers to fix, enhance, and deploy code (also known as velocity)
Poor site performance

Our site suffered from poor performance, with sluggish load times and unreliable availability. This, in turn, directly impacted our ad revenue, since a faster page translated into faster ad viewability and more revenue generation.
To address some of our performance concerns, we conducted different front-end experiments:
We analyzed and evaluated our modules to determine what we could change. For example, we removed modules that were rarely used and rewrote others so they no longer depended on giant JS libraries.
We evaluated our usage of a tag manager in reference to ad-serving performance.
Fragile deployments

Because of the fragile deployment process of using Drupal with Angular, our site suffered from too much downtime. Deploying was a matter of entering the name of a git branch into a UI to release it into different environments. There was no real build process, only version control.
Ultimately, this led to many bad practices that impacted developers, including the lack of a version-control methodology, non-reproducible builds, and the like.
Slower developer velocity
The majority of our developers had front-end experience, but very few of them were knowledgeable about the inner workings of Drupal and PHP. As such, features and bug fixes related to PHP were not addressed as quickly due to knowledge gaps.
Large deployments contributed to slower velocity as well as stability issues, where small changes could break the entire site. Since a deployment covered the entire codebase (Drupal, Drupal plugins/modules, front-end code, PHP scripts, etc.), small code changes in a release could easily be overlooked and not properly tested, breaking the deployment.
Overall, while we had a few quick wins with DNA, the constant regressions due to the setup forced us to consider alternative paths for our architecture.
Rethinking our architecture to include Node.js
Our first step was a proof of concept: a stripped-down, Node.js-powered “lite” version of the site. Stakeholders were happy with the lite experience, commenting on the nearly instantaneous page loads. Analyzing this proof of concept was important in determining our next steps in our architectural overhaul.
Differing from DNA, the lite experience rendered pages entirely on the server.
We used what we learned from the lite experience to serve our website more performantly. This started with rethinking our DNA architecture.
Metrics to measure success
Before we worked on a new architecture, we had to show our business that a re-architecture was needed. The first thing we had to determine was what to measure to show success.
We consulted with the Google Ad team to understand how exactly a high-performing webpage impacts business results. Google showed us proof that improving page speed increases ad viewability, which translates into revenue.
With that in hand, we conducted daily tests across a set of pages to measure metrics such as speed index and page weight. We used a variety of tools to collect these metrics, including WebPageTest, Lighthouse, and sitespeed.io.
As we compiled a list of these metrics, we were able to judge whether certain experiments were beneficial or not. We used our analysis to determine what needed to change in our architecture to make the site more successful.
While we intended to completely rewrite our DNA website, we acknowledged that we needed to stair-step our approach to experimenting with a newer architecture. Using the above methodology, we created a beta page and A/B tested it to verify its success.
From Shark Tank to a beta of our architecture
Recognizing the performance of our original Node.js proof of concept, we held a “Shark Tank” session where we presented and defended different ideal architectures. We evaluated whole frameworks or combinations of libraries like Angular, React, Redux, Ember, lodash, and more.
From this experiment, we collectively agreed to move from our monolithic architecture to a Node.js backend and a newer React frontend. Our timeline for this migration was from nine months to a year.
Ultimately, we decided to use a pattern of small JS libraries and tools, similar to that of a UNIX operating system’s tool chain of commands. This pattern gives us the flexibility to swap out one component from the whole application instead of having to refactor large amounts of code to include a new feature.
On the backend, we needed to decouple page creation and page serving. We kept Drupal as a CMS and created a way for documents to be published out to more scalable systems which can be read by other services. We followed the pattern of Backends for Frontends (BFF), which allowed us to decouple our page frontends and allow for more autonomy of our backend downstream systems. We use the documents published by the CMS to deliver pages with content (instead of the traditional method of the CMS monolith serving the pages).
Over time, we implemented and evolved our usage from our first project. After developing our first few pages, we decided to move away from ExpressJS to Koa to use newer JS standards like async/await. We started with pure React but switched to React-like Inferno.js.
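To make the BFF pattern concrete, here is a minimal Koa sketch; fetchPublishedDocument and renderPage are hypothetical stand-ins for reading CMS-published documents and rendering markup, not actual weather.com code.

```js
// Sketch of a Backend-for-Frontend page service using Koa and async/await.
const Koa = require("koa");
const Router = require("@koa/router");

// Hypothetical: in the real system this would read a document that the
// CMS published to a scalable store, not an in-memory stub.
async function fetchPublishedDocument(locale, location) {
  return { locale, location, title: `Weather for ${location}` };
}

// Hypothetical: turn a published document into HTML.
function renderPage(doc) {
  return `<html><body><h1>${doc.title}</h1></body></html>`;
}

const app = new Koa();
const router = new Router();

// The BFF serves pages from published documents instead of having the
// CMS monolith serve them directly.
router.get("/:locale/weather/:location", async (ctx) => {
  const doc = await fetchPublishedDocument(ctx.params.locale, ctx.params.location);
  ctx.type = "html";
  ctx.body = renderPage(doc);
});

app.use(router.routes());
app.listen(3000);
```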
After evaluating many different build systems (gulp, grunt, browserify, systemjs, etc), we decided to use Webpack to facilitate our build process. We saw Webpack’s growing maturity in a fast-paced ecosystem, as well as the pitfalls of its competitors (or lack thereof).
Webpack solved our core issue of DNA’s JS aggregation and minification. With a centralized build process, we could build JS code using a standardized module system, take advantage of the npm ecosystem, and minify the bundles (all during the build process and not during runtime).
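A webpack configuration along these lines can be as small as the following sketch (illustrative, not the actual weather.com build):

```js
// webpack.config.js: bundle standardized modules and minify at build
// time rather than at runtime.
const path = require("path");

module.exports = {
  mode: "production",       // enables minification out of the box
  entry: "./src/index.js",  // standardized module entry point
  output: {
    path: path.resolve(__dirname, "dist"),
    filename: "[name].[contenthash].js" // cache-busting bundle names
  }
};
```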
Moving from client-side to server-side rendering of the application increased our speed index and got information to the user faster. React helped us with this universal rendering: being able to share code on both the frontend and backend was crucial for server-side rendering and code reuse.
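The idea looks roughly like this sketch, shown here with React's APIs (the team ultimately used the React-like Inferno.js, and the component is illustrative):

```js
// One shared component, rendered to markup on the server and then
// hydrated in the browser.
const React = require("react");
const { renderToString } = require("react-dom/server");

function Forecast({ tempF }) {
  return React.createElement("p", null, `Current temperature: ${tempF}°F`);
}

// Server side: render the markup into the HTML response.
const html = renderToString(React.createElement(Forecast, { tempF: 72 }));
console.log(html); // the rendered <p> markup

// Browser side: the same component attaches to the server markup.
//   ReactDOM.hydrate(React.createElement(Forecast, { tempF: 72 }),
//                    document.getElementById("root"));
```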
Our first launch of our beta page was a Single Page App (SPA). Traditionally, each page and location we rendered required a new hit back to the origin server. With the SPA, we reduced hits back to the origin server and improved the speed of rendering the next view thanks to universal rendering.
The following image shows how much faster the webpage response was after the SPA was introduced.
As our solution included more Node.js, we were able to take advantage of a lot of the tooling associated with a Node.js ecosystem, including ESLint for linting, Jest for testing, and eventually Yarn for package management.
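A unit test under Jest is as small as the following sketch, where formatTemperature is a hypothetical helper rather than real weather.com code:

```js
// Run with `npx jest`. formatTemperature is defined inline so the
// sketch is self-contained.
function formatTemperature(tempF) {
  return `${Math.round(tempF)}°F`;
}

test("formats a rounded Fahrenheit reading", () => {
  expect(formatTemperature(71.6)).toBe("72°F");
});
```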
Linting and testing, as well as a more refined CI/CD pipeline, helped reduce bugs in production. This led to a more mature and stable platform as a whole, higher engineering velocity, and increased developer happiness.
Changing deployment strategies
Recognizing our problems with our DNA deployments, we knew we needed a better solution for delivering code to infrastructure. With our DNA setup, we used a managed system to deploy Drupal. For our new solution, we decided to take advantage of newer, container-based deployment and infrastructure methodologies.
By moving to Docker and Kubernetes, we achieved many best practices:
Separating out disparate pages into different services reduces failures
Building stateless services allows for less complexity, ease of testing, and scalability
Builds are repeatable (Docker images ensure the right artifacts are deployed and consistent)

Our Kubernetes deployment allowed us to be truly distributed across four regions and seven clusters, with dozens of services scaled from 3 to 100+ replicas running on 400+ worker nodes, all on IBM Cloud.
Addressing a familiar set of performance issues
After running a successful beta experiment, we continued down the path of migrating pages into our new architecture. Over time, some familiar issues cropped up:
Pages became heavier
Build times were slower
Developer velocity decreased
We had to evolve our architecture to address these issues.
Beta v2: Creating a more performant page
Our second evolution of the architecture was a renaissance of sorts. We went back to basics, revisited our lite experience, and asked why it was successful. We analyzed our performance issues and concluded that the SPA had become a performance bottleneck. Although an SPA benefits second-page visits, we came to understand that the majority of our users visit the website and leave once they get their information.
We designed and built the solution without an SPA but kept React hydration in order to preserve code reuse across the server and client. We also paid more attention to tooling during development, ensuring that code coverage (the percentage of client-side JS delivered that is actually used) became more efficient.
Removing the SPA was also key to reducing build times. Since a page was no longer stitched together from a single entry point, we split the Webpack builds so that individual pages could have their own set of JS and assets.
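A sketch of such a split build in webpack, with hypothetical page names:

```js
// webpack.config.js: one entry per page instead of a single SPA entry,
// so each page ships only its own JS and assets.
module.exports = {
  mode: "production",
  entry: {
    today: "./src/pages/today.js",
    hourly: "./src/pages/hourly.js",
    tenday: "./src/pages/tenday.js"
  },
  output: { filename: "[name].[contenthash].js" }
};
```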
We were able to reduce our page weight even more compared to the Beta site. Reducing page weight had an overall impact on page load times. The graph below shows how speed index decreased.
Note: Some data was lost between January and October of 2019.
This architecture is now our foundation for any and all pages on weather.com.
weather.com was not transformed overnight, and it took a lot of work to get where we are today. Adding Node.js to our ecosystem required some amount of trial and error.
As we continue to architect, evolve, and expand our solution, we are always looking for ways to improve. Check out weather.com on your desktop, or for our newer/more performant version, check out our mobile web version on your mobile device.
Congrats to the ESLint team on their most recent release, v7.0.0! This new release brings new updates including improved developer experience, core rule changes, new ESLint class, and much more.
ESLint v7.0.0 Highlights:
Dropping support for Node.js v8
Node.js 8 reached EOL in December 2019, and we are officially dropping support for it in this release.
Core rule changes
The ten Node.js/CommonJS rules in core have been deprecated and moved to the eslint-plugin-node plugin.
Several rules have been updated to recognize bigint literals and warn on more cases by default.
eslint:recommended has been updated with a few new rules: no-dupe-else-if, no-import-assign, and no-setter-return.
Improved developer experience
The default ignore patterns have been updated. ESLint will no longer ignore .eslintrc.js and bower_components/* by default. Additionally, it will now ignore nested node_modules directories by default.
ESLint will now lint files with extensions other than .js if they are explicitly defined in overrides.files – no need to use the --ext flag!
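For example, a configuration like this sketch lints TypeScript files without --ext (assuming the @typescript-eslint/parser package is installed; the example is illustrative, not from the release notes):

```js
// .eslintrc.js
module.exports = {
  rules: { semi: ["error", "always"] },
  overrides: [
    {
      files: ["*.ts"],                     // now linted without --ext
      parser: "@typescript-eslint/parser"  // assumed to be installed
    }
  ]
};
```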
ESLint now supports descriptions in directive comments, so things like disable comments can now be clearly documented!
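For example, a description follows a -- separator inside the directive comment:

```js
/* eslint-disable-next-line no-console -- CLI entry point, console output is intentional */
console.log("report written to disk");
```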
Additional validation has been added to the RuleTester class to improve testing custom rules in plugins.
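Testing a custom rule then looks roughly like this sketch, where ./rules/my-custom-rule is a hypothetical plugin rule module:

```js
// Sketch: exercising a plugin rule with RuleTester.
const { RuleTester } = require("eslint");
const rule = require("./rules/my-custom-rule"); // hypothetical rule

const ruleTester = new RuleTester({ parserOptions: { ecmaVersion: 2020 } });

ruleTester.run("my-custom-rule", rule, {
  valid: ["const ok = 1;"],                       // code the rule accepts
  invalid: [{ code: "var bad = 1;", errors: 1 }]  // expects one report
});
```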
ESLint will now resolve plugins relative to the entry configuration file. This means that shared configuration files that are located outside the project can now be colocated with the plugins they require.
Starting in ESLint v7, configuration files and ignore files passed to ESLint using the --config path/to/a-config and --ignore-path path/to/a-ignore CLI flags, respectively, will resolve from the current working directory rather than the file location. This allows users to utilize shared plugins without having to install them directly in their project.
New ESLint class
The CLIEngine class provides a synchronous API that is blocking the implementation of features such as parallel linting, supporting ES modules in shareable configs/parsers/plugins/formatters, and adding the ability to visually display the progress of linting runs. The new ESLint class provides an asynchronous API that ESLint core will use going forward. CLIEngine will remain in core for the foreseeable future but may be removed in a future major version.
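A minimal sketch of the new asynchronous API:

```js
const { ESLint } = require("eslint");

(async () => {
  const eslint = new ESLint();
  const results = await eslint.lintFiles(["src/**/*.js"]);
  const formatter = await eslint.loadFormatter("stylish");
  console.log(formatter.format(results));
})();
```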