Dojo, an OpenJS Foundation Impact Project, just hit a new milestone. Dojo 7 is a progressive framework for modern web applications built with TypeScript. The framework scales easily, supporting everything from simple static websites all the way up to enterprise-scale single-page reactive web applications.
Dojo 7 Widgets takes a step forward in out-of-the-box usability, adding more than 20 new widgets and a Material theme that developers can use to build feature-rich applications even faster. The new widgets are consistent, usable, and accessible, covering important website building blocks like cards, passwords, forms, and more.
Dojo has been used widely over the years by companies such as Cisco, JP Morgan, Esri, Intuit, ADP, Fannie Mae, Daimler, and many more. Applications created with the Dojo Toolkit more than 10 years ago still work today with only minor adjustments and upgrades.
Modern Dojo is open source software available under the modified BSD license. Developers can try modern Dojo from Code Sandbox, or install Dojo via npm:
npm i @dojo/cli @dojo/cli-create-app -g
Create your first app
dojo create app --name hello-world
Get started with widgets
npm install @dojo/widgets
Visit dojo.io for documentation, tutorials, cookbooks, and other materials. Read Dojo’s blog on this new release here.
The Weather Company uses Node.js to power their weather.com website, a multinational weather information and news website available in 230+ locales and localized in about 60 languages. As an industry leader in audience reach and accuracy, weather.com delivers weather data, forecasts, observations, historical data, news articles, and video.
Because weather.com offers a location-based service that is used throughout the world, its infrastructure must support consistent uptime, speed, and precise data delivery. Scaling the solution to billions of unique locations has created multiple technical challenges and opportunities for the technical team. In this blog post, we cover some of the unique challenges we had to overcome when building weather.com and discuss how we ended up using Node.js to power our internationalized weather application.
Drupal ‘n Angular (DNA): The early days
In 2015, we were a Drupal ‘n Angular (DNA) shop. We unofficially pioneered the industry by marrying Drupal and Angular together to build a modular, content-based website. We used Drupal as our CMS to control content and page configuration, and we used Angular to code front-end modules.
Front-end modules were small blocks of user interfaces that had data and some interactive elements. Content editors would move around the modules to visually create a page and use Drupal to create articles about weather and publish it on the website.
DNA was successful in rapidly expanding the website’s content and giving editors the flexibility to create page content on the fly.
As our usage of DNA grew, we faced many technical issues which ultimately boiled down to three main themes: poor site performance, fragile deployments causing downtime, and slower time for developers to fix, enhance, and deploy code (also known as velocity).
Our site suffered from poor performance, with sluggish load times and unreliable availability. This, in turn, directly impacted our ad revenue since a faster page translated into faster ad viewability and more revenue generation.
To address some of our performance concerns, we conducted different front-end experiments.
We analyzed and evaluated modules to determine what we could change. For example, we evaluated removing modules that were rarely used, and we rewrote other modules so they wouldn't depend on giant JS libraries.
We evaluated our usage of a tag manager in reference to ad serving performance.
Because of the fragile deployment process of using Drupal with Angular, our site suffered from too much downtime. Deploying was a matter of entering the name of a git branch into a UI to release it into different environments. There was no real build process, only version control.
Ultimately, this led to many bad practices that impacted developers, including a lack of version control methodology, non-reproducible builds, and the like.
Slower developer velocity
The majority of our developers had front-end experience, but very few of them were knowledgeable about the inner workings of Drupal and PHP. As such, features and bug fixes related to PHP were not addressed as quickly due to knowledge gaps.
Large deployments contributed to slower velocity as well as stability issues, where small changes could break the entire site. Since a deployment was the entire codebase (Drupal, Drupal plugins/modules, front-end code, PHP scripts, etc), small code changes in a release could easily get overlooked and not be properly tested, breaking the deployment.
Overall, while we had a few quick wins with DNA, the constant regressions due to the setup forced us to consider alternative paths for our architecture.
Rethinking our architecture to include Node.js
Stakeholders were happy with the lite experience, a stripped-down Node.js proof of concept, commenting on the nearly instantaneous page loads. Analyzing this proof of concept was important in determining our next steps in our architectural overhaul.
Differing from DNA, the lite experience:
Rendered pages server-side only
We used what we learned from the lite experience to serve our website more performantly. This started with rethinking our DNA architecture.
Metrics to measure success
Before we worked on a new architecture, we had to show our business that a re-architecture was needed. The first thing we had to determine was what to measure to show success.
We consulted with the Google Ad team to understand how exactly a high-performing webpage impacts business results. Google showed us proof that improving page speed increases ad viewability which translates to revenue.
With that in hand, each day we conducted tests across a set of pages to measure performance metrics such as speed index, page load time, and page weight.
We used a variety of tools to collect our metrics: WebPageTest, Lighthouse, sitespeed.io.
As we compiled a list of these metrics, we were able to judge whether certain experiments were beneficial or not. We used our analysis to determine what needed to change in our architecture to make the site more successful.
While we intended to completely rewrite our DNA website, we acknowledged that we needed a stair-step approach to experimenting with a newer architecture. Using the above methodology, we created a beta page and A/B tested it to verify its success.
From Shark Tank to a beta of our architecture
Recognizing the performance of our original Node.js proof of concept, we held a “Shark Tank” session where we presented and defended different ideal architectures. We evaluated whole frameworks or combinations of libraries like Angular, React, Redux, Ember, lodash, and more.
From this experiment, we collectively agreed to move from our monolithic architecture to a Node.js backend and newer React frontend. Our timeline for this migration was between nine months to a year.
Ultimately, we decided to use a pattern of small JS libraries and tools, similar to that of a UNIX operating system’s tool chain of commands. This pattern gives us the flexibility to swap out one component from the whole application instead of having to refactor large amounts of code to include a new feature.
On the backend, we needed to decouple page creation and page serving. We kept Drupal as a CMS and created a way for documents to be published out to more scalable systems which can be read by other services. We followed the pattern of Backends for Frontends (BFF), which allowed us to decouple our page frontends and allow for more autonomy of our backend downstream systems. We use the documents published by the CMS to deliver pages with content (instead of the traditional method of the CMS monolith serving the pages).
Over time, we implemented and evolved our usage beyond our first project. After developing our first few pages, we decided to move from Express to Koa to use newer JS standards like async/await. We started with pure React but switched to the React-like Inferno.js.
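The draw of Koa was its async/await middleware model. As an illustration (this hand-rolls Koa's compose step in plain Node rather than using the framework, and the handlers are invented), each middleware awaits the next, so cross-cutting concerns like timing read top-down:

```javascript
// Koa-style async middleware, sketched without the framework itself.
// Each middleware receives (ctx, next) and may await next() to run
// the rest of the chain before finishing its own work.
function compose(middleware) {
  return function (ctx) {
    function dispatch(i) {
      const fn = middleware[i];
      if (!fn) return Promise.resolve();
      return Promise.resolve(fn(ctx, () => dispatch(i + 1)));
    }
    return dispatch(0);
  };
}

// Example: a timing middleware wrapped around a page handler
const handler = compose([
  async (ctx, next) => {
    const start = Date.now();
    await next(); // run downstream middleware, then resume here
    ctx.responseTime = Date.now() - start;
  },
  async (ctx) => {
    ctx.body = `<h1>${ctx.title}</h1>`;
  },
]);
```

With Express-era callbacks, the same timing logic would be spread across callbacks and response event listeners; async/await keeps it in one place.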
After evaluating many different build systems (gulp, grunt, browserify, systemjs, etc), we decided to use Webpack to facilitate our build process. We saw Webpack’s growing maturity in a fast-paced ecosystem, as well as the pitfalls of its competitors (or lack thereof).
Webpack solved our core issue of DNA’s JS aggregation and minification. With a centralized build process, we could build JS code using a standardized module system, take advantage of the npm ecosystem, and minify the bundles (all during the build process and not during runtime).
Moving from client-side to server-side rendering of the application improved our speed index and got information to the user faster. React helped us with universal rendering: being able to share code on both the frontend and backend was crucial to us for server-side rendering and code reuse.
Our first launch of our beta page was a Single Page App (SPA). Traditionally, we had to render each page and location as a hit back to the origin server. With the SPA, we were able to reduce our hits back to the origin server and improve the speed of rendering the next view thanks to universal rendering.
The following image shows how much faster the webpage response was after the SPA was introduced.
As our solution included more Node.js, we were able to take advantage of a lot of the tooling associated with a Node.js ecosystem, including ESLint for linting, Jest for testing, and eventually Yarn for package management.
Linting and testing, as well as a more refined CI/CD pipeline, helped reduce bugs in production. This led to a more mature and stable platform as a whole, higher engineering velocity, and increased developer happiness.
Changing deployment strategies
Recognizing our problems with our DNA deployments, we knew we needed a better solution for delivering code to infrastructure. With our DNA setup, we used a managed system to deploy Drupal. For our new solution, we decided to take advantage of newer, container-based deployment and infrastructure methodologies.
By moving to Docker and Kubernetes, we achieved many best practices:
Separating out disparate pages into different services reduces failures
Building stateless services allows for less complexity, ease of testing, and scalability
Builds are repeatable (Docker images ensure the right artifacts are deployed and consistent)

Our Kubernetes deployment allowed us to be truly distributed across four regions and seven clusters, with dozens of services scaled from 3 to 100+ replicas running on 400+ worker nodes, all on IBM Cloud.
Addressing a familiar set of performance issues
After running a successful beta experiment, we continued down the path of migrating pages into our new architecture. Over time, some familiar issues cropped up:
Pages became heavier
Build times were slower
Developer velocity decreased
We had to evolve our architecture to address these issues.
Beta v2: Creating a more performant page
Our second evolution of the architecture was a renaissance (rebirth). We had to go back to basics, revisit our lite experience, and see why it was successful. We analyzed our performance issues and concluded that the SPA was becoming a performance bottleneck. Although a SPA benefits second-page visits, we came to understand that the majority of our users visit the website and leave once they get their information.
We designed and built the solution without a SPA, but kept React hydration in order to keep code reuse across the server and client-side. We paid more attention to the tooling during development by ensuring that code coverage (the percentage of JS client code used vs delivered) was more efficient.
Removing the SPA overall was key to reducing build times as well. Since a page was no longer stitched together from a singular entry point, we split the Webpack builds so that individual pages can have their own set of JS and assets.
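A per-page split like this can be sketched as a Webpack configuration (the page names and paths below are hypothetical, not weather.com's actual build):

```javascript
// Hypothetical webpack.config.js: one entry (and thus one bundle) per page,
// instead of a single SPA entry point stitching every page together.
const path = require('path');

module.exports = {
  mode: 'production',
  entry: {
    today: './src/pages/today/index.js',
    'ten-day': './src/pages/ten-day/index.js',
    radar: './src/pages/radar/index.js',
  },
  output: {
    path: path.resolve(__dirname, 'dist'),
    // Content hashes keep long-lived caching safe across deployments
    filename: '[name].[contenthash].js',
  },
};
```

Each page then ships only the JS and assets it actually needs, and changing one page no longer forces a rebuild of a single monolithic bundle.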
We were able to reduce our page weight even more compared to the Beta site. Reducing page weight had an overall impact on page load times. The graph below shows how speed index decreased.
Note: Some data was lost between January and October of 2019.
This architecture is now our foundation for any and all pages on weather.com.
weather.com was not transformed overnight and it took a lot of work to get where we are today. Adding Node.js to our ecosystem required some amount of trial and error.
As we continue to architect, evolve, and expand our solution, we are always looking for ways to improve. Check out weather.com on your desktop, or for our newer/more performant version, check out our mobile web version on your mobile device.
Earlier this year, to help our community better understand ways to participate, as well as to provide hosted projects ways to showcase what they are working on, I started hosting bi-weekly Open Office Hours.
The goal of Office Hours is to give members of our community a place to ask questions, get guidance on onboarding, and learn more about other projects in the Foundation. It has also served as a place for current projects to get connected to the wider OpenJS Foundation community and share key learnings.
LFW211 is a vendor-neutral training geared toward developers who wish to master and demonstrate creating Node.js applications. The course trains developers on a broad range of Node.js capabilities in depth, equipping them with rigorous foundational skills and knowledge that will translate to building any kind of Node.js application or library.
By the end of the course, participants:
Become skillful with Node.js debugging practices and tools
Efficiently interact at a high level with I/O, binary data and system metadata
Attain proficiency in creating and consuming ecosystem/innersource libraries
Ready to take the training? The course is available now. The $299 course fee – or $499 for a bundled offering of both the course and related certification exam – provides unlimited access to all course content and labs for one year. This course and exam, in addition to all Linux Foundation training courses and certification exams, are discounted 30% through May 31 by using code ANYWHERE30. Interested individuals may enroll here.
The creators of Node-RED recently gave an informative Ask Me Anything (AMA) which you can watch below. Node-RED is a Growth Project at the OpenJS Foundation. Speakers include Nick O’Leary (@knolleary), Dave Conway-Jones (@ceejay), and John Walicki (@johnwalicki).
This AMA can help individuals interested in Node-RED get a better understanding of the flow-based programming tool. Using a combination of user-generated and preexisting questions, the discussion focuses heavily on the processes employed by the creators of Node-RED to optimize the tool.
The creators of Node-RED answered questions from the live chat, giving insight into how Node-RED is iterated and improved. Questions ranged from where Node-RED has gone in the last 7 years to whether or not Node-RED is a prototyping tool.
In a recent interview with DZone, Bethany Griggs, Node.js Technical Steering Committee member and Open-source Engineer at IBM gave some insight into the recent Node.js v14 release as well as the latest in Node.js overall. Topics covered include changes with the project pertaining to contributor onboarding, getting started in Node.js, challenges, and highlights of Node.js v14. Node.js is an impact project of the OpenJS Foundation.
Bethany has been a Node Core Collaborator for over two years. She contributes to the open-source Node.js runtime and is a member of the Node.js Release Working Group where she is involved with auditing commits for the long-term support (LTS) release lines and the creation of releases.
Bethany presents and runs workshops at international conferences, including at NodeSummit, London Node.js User Group, and NodeConfEU.
Congrats to the ESLint team on their most recent release, v7.0.0! This new release brings new updates including improved developer experience, core rule changes, new ESLint class, and much more.
ESLint v7.0.0 Highlights:
Dropping support for Node.js v8
Node.js 8 reached EOL in December 2019, and we are officially dropping support for it in this release.
Core rule changes
The ten Node.js/CommonJS rules in core have been deprecated and moved to the eslint-plugin-node plugin.
Several rules have been updated to recognize bigint literals and warn on more cases by default.
eslint:recommended has been updated with a few new rules: no-dupe-else-if, no-import-assign, and no-setter-return.
Improved developer experience
The default ignore patterns have been updated. ESLint will no longer ignore .eslintrc.js and bower_components/* by default. Additionally, it will now ignore nested node_modules directories by default.
ESLint will now lint files with extensions other than .js if they are explicitly defined in overrides.files – no need to use the --ext flag!
ESLint now supports descriptions in directive comments, so things like disable comments can now be clearly documented!
Additional validation has been added to the RuleTester class to improve testing custom rules in plugins.
ESLint will now resolve plugins relative to the entry configuration file. This means that shared configuration files that are located outside the project can now be colocated with the plugins they require.
Starting in ESLint v7, configuration files and ignore files passed to ESLint using the --config path/to/a-config and --ignore-path path/to/a-ignore CLI flags, respectively, will resolve from the current working directory rather than the file location. This allows users to utilize shared plugins without having to install them directly in their project.
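Taken together, a v7-era configuration might look like the following sketch (the rule and parser choices are illustrative, and `@typescript-eslint/parser` is assumed to be installed separately):

```javascript
// Hypothetical .eslintrc.js illustrating the v7 behaviors described above.
module.exports = {
  // eslint:recommended in v7 now includes no-dupe-else-if,
  // no-import-assign, and no-setter-return
  extends: 'eslint:recommended',
  overrides: [
    {
      // v7 lints extensions listed in overrides.files
      // without needing the --ext CLI flag
      files: ['*.ts'],
      parser: '@typescript-eslint/parser',
    },
  ],
};
```

Directive comments can also carry a description after `--` in v7, e.g. `// eslint-disable-next-line no-console -- CLI output is intentional here`, so suppressions document themselves.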
New ESLint class
The CLIEngine class provides a synchronous API that is blocking the implementation of features such as parallel linting, supporting ES modules in shareable configs/parsers/plugins/formatters, and adding the ability to visually display the progress of linting runs. The new ESLint class provides an asynchronous API that ESLint core will use going forward. CLIEngine will remain in core for the foreseeable future but may be removed in a future major version.
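The asynchronous API looks roughly like the following sketch (the file glob and formatter name are illustrative, and the `eslint` package at version 7 or later is assumed):

```javascript
// Sketch of the new asynchronous ESLint class replacing CLIEngine's
// synchronous API. Every step that touches the filesystem is awaited.
async function lintProject() {
  const { ESLint } = require('eslint'); // the ESLint class ships in eslint >= 7
  const eslint = new ESLint();
  const results = await eslint.lintFiles(['src/**/*.js']);
  const formatter = await eslint.loadFormatter('stylish');
  console.log(formatter.format(results));
  return results;
}
```

Because each step returns a promise, core can later schedule work across processes (parallel linting) or load ES module plugins without changing the calling convention.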
An Interview with Daijiro Wachi, Node.js Core Collaborator
Daijiro is a Node.js Core Collaborator.
In his spare time, Daijiro helps kids and beginners learn coding at CoderDojo and NodeSchool.
Why did you want to get certified?
I chose certification as a way to continually maintain the big picture of fast-growing software development in Node.js. To remain a strong problem solver, I need to continue to acquire both knowledge and experience, and my way of acquiring knowledge has mostly been through real experience. This approach gives me deep knowledge related to the project. However, it means the expertise I can earn depends on the scope of the project. As both the JSNAD and JSNSD exams require a wide range of intermediate-level knowledge and hands-on skills to answer the questions, I thought the exam would be an ideal way to validate and maintain my breadth of knowledge.
What's the value for you personally and professionally in getting certified?
Personally, passing the exam helped me to expand my network within the community. After taking the exam, I realized that the exam should provide translated versions to be open for everyone. I then asked the author of the exam, David Clements, to introduce me to Robin Ginn so I could deliver that request. During the process, I asked how many people were interested in a Japanese version to gauge demand in Japan, and I received questions from people who were interested in the exam and warm words of support for the request. It’s always good to connect with people who are interested in the same thing. I hope we can accelerate the translation together.
As a professional, I think there were two benefits. One is unlearning. I thought I knew most of the Node.js API and how to use it, but I couldn’t get 100% marks in the exam. So I decided to reread documents and blogs and set up my own server to practice, which was something I could not do while working on a big framework. The second is repetition. As this exam is still new and many people are interested, I had the chance to explain the certification exam both internally and externally. Through that communication, people realize that I am a Node.js professional and what that means.
Would you recommend that other Node.js developers take the certification?
Yes, I definitely recommend it. One of the pros of the exam is the comprehensiveness of its scope for effectively learning intermediate-level Node.js knowledge. Therefore, I think it can be recommended to anyone at any level (beginner, intermediate, advanced). Beginners can use it as a learning goal, treating the exam’s scope as an order for their studies, which is always difficult to design on your own. Intermediate developers can use it to maintain and update their knowledge. Working in the same codebase for a long time often leaves them with fewer chances to learn something outside the scope of the project, so they tend to forget the things they learned in the past. Advanced developers can use it to understand the range to be covered when coaching beginner- and intermediate-level developers. They can see how things that are natural for them are challenging for others.
On the other hand, there are some cons. The big blocker would be the price. I purchased it cheaply during a Cyber Monday sale. If you can negotiate with your employer, I recommend talking to your boss about the above benefits and having them become a supporter.
How did you prepare? What advice would you give someone considering taking the exam?
The preparation was challenging, as there was very little information about the exam on the web yet, and the official website only mentions Domains & Competencies. The online exam format was not familiar to me either, and the process seemed to be different from other online exams I had taken before. So I used the first attempt to experience how the exam works and find out what the actual scope is. Then, after understanding how to answer properly, I immediately retook it using the free retake. The content of the questions themselves was fine for me.
What do you want to do next?
As Node.js has opened my door to the world, I want to give back the same to people in the community. Promoting this exam with problem-solving would be one of the ways to achieve that. Currently, I think that the value of this exam has not been promoted enough, at least in Japan, due to lack of awareness, language barrier, price, and so on. As a volunteer, I will try to contribute to The OpenJS Foundation to solve the problems from my perspective, and I hope this can help nurture more software engineers in the world. Don’t you think it is very exciting to be involved in a kind of “Digital Transformation” at this scale?
 Editor’s note: a 30% discount on Node.js Certification has been extended through May 31, 2020. Use promo code ANYWHERE30 to get your discount.
OpenJS Node.js Application Developer (JSNAD)
The OpenJS Node.js Application Developer certification is ideal for the Node.js developer with at least two years of experience working with Node.js.
Get more information and enroll »
OpenJS Node.js Services Developer
The OpenJS Node.js Services Developer certification is for the Node.js developer with at least two years of experience creating RESTful servers and services with Node.js.
The OpenJS Foundation is excited to announce the full schedule of keynote speakers, sessions and workshops for OpenJS World, the Foundation’s annual global conference. From June 23 to 24, developers, software architects, engineers, and other community members from OpenJS Foundation hosted projects such as AMP, Dojo, Electron, and Node.js will tune in to network, learn and collaborate.
Due to continuing COVID-19 safety concerns, OpenJS World 2020 will now take place as a free virtual experience on the same dates: June 23 – June 24, in the US Central Time Zone. If you have already registered and paid, we will be in touch with you about your refund.
Today we are excited to announce keynote speakers, sessions and hands-on workshops that will be joining us at OpenJS World!
Chronicles of the Node.js Ecosystem: The Consumer, The Author, and The Maintainer – Bethany Griggs, Open Source Engineer and Node.js TSC Member, IBM
Fighting Impostor Syndrome with the Internet of Things – Tilde Thurium, Developer Evangelist, Twilio
From Streaming to Studio – The Evolution of Node.js at Netflix – Guilherme Hermeto, Senior Platform Engineer at Netflix
Hint, Hint!: Best Practices for Web Developers with webhint – Rachel Simone Weil, Edge DevTools Program Manager, Microsoft
User-Centric Testing for 2020: How Expedia Modernized its Testing for the Web – Tiffany Le-Nguyen, Software Development Engineer, Expedia Group
The conference covers a range of topics for developers and end-users alike including frameworks, security, serverless, diagnostics, education, IoT, AI, front-end engineering, and much more.
Interested in participating online in OpenJS World? Register now.
Also, sponsorships for this year’s event are available now. If you are interested in sponsoring, check out the event prospectus for details and benefits.
For new and current contributors, maintainers, and collaborators to the Foundation, we are hosting the OpenJS Foundation Collaborator Summit on June 22, 25, and 26. This event is an ideal time for people interested or working on projects to share, learn, and get to know each other. Learn more about registering for the OpenJS Collaborator Summit.
Thank you to the OpenJS World program committee for their tireless efforts in bringing in and selecting top tier keynote speakers and interesting and informative sessions. We are honored to work with such a dedicated and supportive community!