
How Node.js saved the U.S. Government $100K


The following blog is based on a talk given at the OpenJS Foundation’s annual OpenJS World event and covers solutions created with Node.js.

When someone proposes a complicated, expensive solution, ask yourself: can it be done cheaper, better, and/or faster? Last year, an external vendor wanted to charge $103,000 to create an interactive form and store the responses. Ryan Hillard, Systems Developer at the U.S. Small Business Administration, was brought in to create a less expensive, low-maintenance alternative to the vendor’s proposal. Hillard delivered a solution with roughly 320 lines of code and $3,000. In the talk below, Hillard describes the difficulties and how his Node.js solution fixed the problem.

Last year, Hillard started work on a government case management system that received and processed feedback from external and internal users. Unfortunately, a recent upgrade and rigorous security measures prevented external users from leaving feedback. Hillard needed to create a secure interactive form and then store the data. The solution also needed to be cheap, easy to maintain, and stable.

Hillard decided to use three common technologies: Amazon Simple Storage Service (S3), Amazon Web Services (AWS) Lambda, and Node.js. Together, these pieces provided a simple and versatile way to capture and then store response data. Maintenance is low because Amazon manages the servers. Additionally, future developers can easily alter and improve the process because all three technologies are widely used.
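
Hillard’s exact code isn’t shown here, but a minimal sketch of the pattern he describes, an API Gateway-triggered Lambda that validates a submission and writes it to S3, might look like the following. The bucket, field names, and wiring are assumptions for illustration:

```js
// Hypothetical sketch (not Hillard's actual code) of a Lambda handler,
// invoked via API Gateway, that validates a form submission and stores
// it as a JSON object in S3.
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

exports.handler = async (event) => {
  const body = JSON.parse(event.body || '{}');

  // Reject malformed submissions before touching storage.
  if (typeof body.feedback !== 'string' || body.feedback.length === 0) {
    return { statusCode: 400, body: JSON.stringify({ error: 'feedback is required' }) };
  }

  // One object per submission, keyed by timestamp plus a random suffix.
  const key = `responses/${Date.now()}-${Math.random().toString(36).slice(2)}.json`;
  await s3.putObject({
    Bucket: process.env.RESPONSE_BUCKET, // hypothetical bucket name via env var
    Key: key,
    Body: JSON.stringify({ receivedAt: new Date().toISOString(), ...body }),
    ContentType: 'application/json',
  }).promise();

  return { statusCode: 201, body: JSON.stringify({ stored: key }) };
};
```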

To end his talk, Hillard discussed the design and workflow processes that led him to his solution. He compares JavaScript to a giant toolkit with hundreds of libraries and dependencies: a tool for every purpose. However, this variety can be counterproductive, as complexity (and thus management time) increases.

Developers should ask themselves how they can solve their problems without introducing anything new. In other words, size does matter — the smallest, simplest toolkit is the best!

OpenJS Foundation AMA: Node.js Certifications


In this AMA, we discussed the benefits of the OpenJS Node.js certification program. The certification tests a developer’s knowledge of Node.js and allows them to quickly establish their credibility and value in the job market. Robin Ginn, OpenJS Foundation Executive Director, served as the moderator. David Clements, Technical Lead of OpenJS Certifications, and Adrian Estrada, VP of Engineering at NodeSource, answered questions posed by the community. The full AMA is available at the link below: 

The OpenJS Foundation offers two certifications: OpenJS Node.js Application Developer (JSNAD) and OpenJS Node.js Services Developer (JSNSD). The Application Developer certification tests general knowledge of Node.js (file systems, streams, etc.). The Services Developer certification, on the other hand, asks developers to create the kinds of basic Node.js services a startup or enterprise might require, such as setting up servers and guarding against malicious user input.
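
As a rough illustration of the kind of task the Services Developer exam describes (not an actual exam question), here is a minimal sketch of a Node.js service that validates user input before acting on it; Express and the route shape are assumptions:

```js
// Minimal sketch of a service that guards against malicious user input.
const express = require('express');
const app = express();

app.use(express.json({ limit: '10kb' })); // cap payload size up front

app.post('/echo', (req, res) => {
  const { name } = req.body || {};
  // Whitelist-validate rather than trusting user input.
  if (typeof name !== 'string' || !/^[\w ]{1,64}$/.test(name)) {
    return res.status(400).json({ error: 'invalid name' });
  }
  res.json({ greeting: `Hello, ${name}` });
});

app.listen(3000, () => console.log('listening on 3000'));
```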

In the talk, Clements and Estrada discussed why they created the certifications. They wanted to create an absolute measure of practical skill to help developers stand out and ease the difficulties of hiring for the industry. To that end, OpenJS certifications are relatively cheap and applicable to real world problems encountered in startup and enterprise environments. 

A timestamped summary of the video is available below: 

Note: If you are not familiar with the basics of the two certifications offered by the OpenJS Foundation, jumping to the two bolded sections may be a good place to start.

AMA Topics

Introductions 0:20

How did the members start working together? 2:35

How did work on the certifications start? 5:07

Is it possible to have feedback on the exam? 9:50

Applications of psychometric analysis 12:26

What is the Node.js Application Developer certification + Services Developer certification? 14:54

How do you take the exam? What should you expect? 18:22

Will there be differential pricing between countries? 22:04

How are the criteria for new npm packages chosen? 24:55

Are test takers able to use Google or MDN? 31:52

What benefits do OpenJS certifications have for developers? 33:22

How to use the certification after completion 39:43

What are the exam principles? 40:56

How much experience is required for the exam? 44:12 

Course available in Chinese 49:09

How will new Node versions affect the certifications? 53:43 

Closing thoughts 56:35

Node.js announces new mentorship opportunity


This post was written by A.A. Sobaki and the Node.js Mentorship Initiative. It first appeared on the project’s blog. Node.js is an Impact Project at the OpenJS Foundation.

The Node.js Mentorship Initiative is excited to announce a new mentee opening! We’d like to invite experienced developers to apply to join the Node.js Examples Initiative.

If you’re not familiar, the Examples Initiative’s mission is to build and maintain a repository of runnable, tested examples that go beyond “hello, world!” This is an important place to find practical and real-world examples of how to use the runtime in production.

Being a part of the Examples Initiative is a big opportunity. As a mentee, you will work with and learn from industry leaders and world-class software engineers. You will receive personalized guidance as you write code that will serve as a template for countless developers as they begin to use Node.js in their projects.

To get started, complete the application and coding challenge linked below. The coding challenge is a chance to showcase your skills and is estimated to take two to four hours to complete.

Click the link to get started.

We look forward to receiving your application.

Node.js Promise reject use case survey


This post was contributed by the Node.js Technical Steering Committee.

The Node.js project, an Impact Project of the OpenJS Foundation, currently handles unhandled promise rejections by emitting a deprecation warning to stderr. The warning shows the stack where the rejection happened and states that, in future Node.js versions, unhandled rejections will cause Node.js to exit with a non-zero status code. We intend to remove the deprecation warning, replacing it with a stable behavior which might be different from the one described in the deprecation warning. We’re running a survey to better understand how Node.js users are using Promises and how they deal with unhandled rejections today, so we can make an informed decision on how to move forward.
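
For readers unfamiliar with the behavior in question, the short sketch below reproduces it and shows the process-level hook applications can already use to observe such rejections:

```js
// Run without the hook below, this one line makes Node.js print the
// unhandled-rejection deprecation warning described above to stderr:
Promise.reject(new Error('boom'));

// Applications that want to observe (or enforce stricter handling of)
// unhandled rejections today can register the process-level hook:
process.on('unhandledRejection', (reason) => {
  console.error('Unhandled rejection:', reason);
  process.exitCode = 1; // emulate a non-zero exit, one possible future default
});
```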

To learn more about what unhandled rejections are and the potential issues they raise, check out the original post. Those interested in helping the TSC solve this are encouraged to participate in the survey, which will close on August 24th.

OpenJS Node.js Certification Exams Now Available in Chinese


We are thrilled to share that the OpenJS Node.js Application Developer (JSNAD) and OpenJS Node.js Services Developer (JSNSD) certification exams are now available in Chinese! 

China holds one of the largest populations of Node.js users in the world, and the newly translated exams broaden the availability of these certifications internationally. We are pleased to announce that the two virtual exams have been translated into Chinese and are now available, along with a native-speaking proctor.

Launched in October 2019 by the Linux Foundation and the OpenJS Foundation, the Node.js certification exams have become a highly sought-after credential for web application and services developers around the world. For developers looking to showcase their skill sets, this performance-based, verifiable certification exam helps instill confidence and provides a straightforward way for potential employers to validate that a candidate possesses the necessary skills to be successful.

The JSNAD certification is designed for anyone looking to demonstrate competence with Node.js to create applications of any kind, with a focus on knowledge of Node.js core APIs. The JSNSD certification is designed for anyone looking to demonstrate competence in creating RESTful Node.js Servers and Services (or Microservices) with a particular emphasis on security practices. Both exams are conducted online with remote proctoring, take two hours to complete, and are performance-based, meaning test takers perform tasks and solve problems in real-world situations. 
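
As a hedged sketch of the core-API territory the JSNAD exam emphasizes (the file names here are hypothetical), streaming a file through a transform without buffering it all in memory looks like this:

```js
// Stream a file through a transform, writing the result to another file.
const fs = require('fs');
const { Transform, pipeline } = require('stream');

// A simple transform: uppercase each chunk as it passes through.
const upper = new Transform({
  transform(chunk, encoding, callback) {
    callback(null, chunk.toString().toUpperCase());
  },
});

pipeline(
  fs.createReadStream('input.txt'),   // hypothetical input file
  upper,
  fs.createWriteStream('output.txt'), // hypothetical output file
  (err) => {
    if (err) console.error('pipeline failed:', err);
  }
);
```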

The exam content was developed in partnership with NearForm and NodeSource. The OpenJS Foundation would like to offer thanks to Khaidi Chu, a Node.js project collaborator from the Node.js infrastructure team at Alibaba who helped with the translations. 

We also offer a prep course for the JSNAD exam, although this is currently only in English. A bundle of the English course and Chinese JSNAD exam is available. 

Learn more about all Chinese-language certifications and exams offered by the Linux Foundation at https://training.linuxfoundation.cn.

Getting Certified: How and Why


During OpenJS World, Luca Maraschi, Chief Architect at Telus Digital, sat down with David Mark Clements, tech lead and primary author of the OpenJS Foundation JSNAD and JSNSD certifications, for an in-depth interview on getting certified with Node.js. The interview spans questions from why there are two certifications to how the certification process has been impacted by COVID. This Q&A can serve individuals who are looking to get their Node.js certification but are unsure where to begin. You can watch the full interview below.

Full Video Here

Introductions (0:00)

Getting Started (1:00)

2 Certifications (2:00)

Who should take which exam? (3:30)

What Should People Read Before Trying To Get Certified? (5:30)

How To Train During And Beyond COVID? (8:15)

Do You See A Future Where We Stay Adapted? (14:00)

How Do You See Day-to-Day Enterprise Being Impacted? (16:45)

Your POV on Workshops (23:20)

How Does Certification Impact Community? (27:00)

How Can People Contribute? (29:30)

Learn more about Node.js Certification today.

Node.js Security Working Group AMA Recap


Members of the Node.js Security Working Group recently answered questions about what their group does and how the security of Node.js can be improved. The Node.js Security Working Group is a community-driven project that investigates security reports to reduce the vulnerability of the Node.js ecosystem. Liran Tal, a senior developer advocate at Snyk, served as the moderator for the AMA series. Vladimir de Turckheim, a developer at Sqreen, and Michael Dawson, IBM community lead for Node.js, answered questions and discussed topics posed by viewers of the live stream.

Full panel discussion available here 

The content of the AMA generally fell into two categories: discussions of how the Security Working Group functions and responses to viewers’ security concerns. The talk began with a discussion of “bug bounties”: monetary rewards given to developers who report potential security vulnerabilities in the Node.js ecosystem. Dawson and de Turckheim discussed problems with this system, challenges that the working group has faced, and the future of the group in a changing landscape.

Dawson and de Turckheim also addressed security concerns raised by viewers of the live stream. The questions spanned a wide range of topics, from identifying security risks to using HTTPS to protect online data. Finally, the members of the panel reviewed how people can join working groups. They admitted that working groups take up a lot of time but are a good way to give back to the community and meet people who share similar interests.

A summary of the video is available below: 

Panel starts 0:25

Member introductions 1:33

What is the Node.js Security Working Group? 04:52

How the members got started, what current members do (Vladimir) 10:18

Bug Bounties 12:13

Where can you find the Security Working Group? 14:43

How Bug Bounties can create tension 15:21

Potential alterations to the Bug Bounty system 19:19

How to roll out a patch to the Node.js ecosystem 23:04

Challenges that the Security Working Group faces 24:19

Interactions with the OpenJS and larger Node.js community 33:58

Using pattern searches to detect security issues early 40:09

How to secure JSON data transmissions 43:30

Should a best practice security guide be created? 46:50

Are malicious modules as common as they used to be? 50:30 

How to distinguish between unintended bugs and malicious modules 53:35

Closing thoughts 55:28  

OpenJS World – Featured Profile – Beth Griggs


Since 2016, Beth Griggs has been working as an Open Source Engineer at IBM where she focuses on the Node.js runtime. Node.js is an impact project in the OpenJS Foundation. Beth is a Node.js Technical Steering Committee Member and a member of the Node.js Release Working Group where she is involved with auditing commits for the long-term support (LTS) release lines and the creation of releases. 

What was your first experience of Node.js?

I joined the party a little late; my first experience of Node.js was while completing my final-year engineering project for my Bachelor’s degree in 2016. My engineering project was to create a ‘living meta-analysis’ tool that would enable researchers, specifically psychologists, to easily combine and update findings from related independent studies. I originally implemented the tool using a PHP framework, but after some time I realized I wasn’t enjoying the developer experience and was hitting limitations with the framework. Halfway through my final year of university, I heard some classmates raving about Node.js, so I decided to check it out. Within a few weeks, I had reimplemented my project from scratch using Node.js.

How did you start contributing to Node.js?

I rejoined IBM in 2016, having spent my gap year prior to university at IBM as a Java Test Engineer in their WebSphere organization. I joined the Node.js team in IBM Runtime Technologies, which at the time was responsible for building and testing the IBM SDK for Node.js. From running the Node.js test suite regularly internally, my team identified flaky tests that needed fixing out in the community, which turned into some of my first contributions to Node.js core.

Over the next few years, our team deprecated the IBM SDK for Node.js in favor of maintaining these platforms directly in the Node.js community. Around the same time, Myles Borins offered to mentor me to become involved with the Release Working Group, with a view to becoming a Node.js releaser (thanks, Myles!). Since then, that’s the area of Node.js where most of my contributions have been focused.

What has changed since you first started contributing to Node.js?

One of the biggest changes is the emphasis on onboarding new contributors to major parts of the project: getting new names and faces into positions where they can actively contribute to Node.js, and increasingly socializing the ways people can contribute other than code.

Documentation of the internal contributor processes has improved a lot too, but there’s still room to improve.

What are you most excited about with the Node.js project at the moment?

I’m really enjoying the work that is happening in the pkgjs GitHub organization, where we’re building tools for package maintainers. I’m excited to see the tools that come out of the pkgjs organization and the Node.js Package Maintenance team.

What are you most looking forward to at OpenJS World?

There are so many great talks (although I’m a little biased, as I was on the content team). I’m really looking forward to the keynote with Christina H. Koch, a NASA astronaut, and also the ‘Broken Promises’ workshop by James and Matteo from NearForm.

On the Cross Project Summit day, I’m looking forward to the Node.js Package Maintenance session. We’ve got a lot of momentum in that working group at the moment and it’ll be great to have input from the other OpenJS projects. I’m hoping my talk “Chronicles of the Node.js Ecosystem: The Consumer, The Author, and The Maintainer” is a good primer for the session. 

I’ll also be at the IBM virtual booth throughout the conference and catching my colleagues’ talks (https://developer.ibm.com/technologies/node-js/blogs/ibm-at-openjs-world-2020). 

What does your role at IBM include other than contributing to the Node.js community?

A wide variety of things, really; no week is ever full of the same tasks. I’m often preparing talks and workshops for various conferences. Alongside that, I spend my time researching common methods and best practices for deploying Node.js applications to the cloud, specifically focusing on IBM Cloud and OpenShift. I often find myself assisting internal teams with their usage of Node.js, analyzing various IBM offerings from a typical Node.js developer’s point of view, and providing feedback. I’m also the scrum master for my team, so a portion of my time is taken up with those responsibilities too.

What do you do outside of work?

Most often, hanging out with my dog, Laddie. I’m a DIY enthusiast, mainly painting or upcycling various pieces of second-hand furniture. Since the start of lockdown in the UK, I have also been writing a book, which is a convenient pastime. I’m a big fan of replaying my old PS1 games too.

Where should people go to get started contributing to the Node.js Project? 

Go to https://www.nodetodo.org/, a website that walks you through a path towards your first contribution to Node.js. As long as you’re a little bit familiar with Node.js, you can start here. The other option is to look for issues in the Node.js GitHub organization’s repositories labeled ‘good first issue’.

Alternatively, you can join one of our working group sessions on Zoom and start participating in discussions. The sessions are listed in the nodejs.org calendar. If you’re specifically interested in the Node.js Release Working Group, I run fortnightly mentoring/shadowing sessions that you’re welcome to join.

How The Weather Company uses Node.js in production


Using Node.js improved site speed, performance, and scalability

This piece was written by Noel Madali and originally appeared on the IBM Developer Blog. IBM is a member of the OpenJS Foundation.

The Weather Company uses Node.js to power their weather.com website, a multinational weather information and news website available in 230+ locales and localized in about 60 languages. As an industry leader in audience reach and accuracy, weather.com delivers weather data, forecasts, observations, historical data, news articles, and video.

Because weather.com offers a location-based service that is used throughout the world, its infrastructure must support consistent uptime, speed, and precise data delivery. Scaling the solution to billions of unique locations has created multiple technical challenges and opportunities for the technical team. In this blog post, we cover some of the unique challenges we had to overcome when building weather.com and discuss how we ended up using Node.js to power our internationalized weather application.

Drupal ‘n Angular (DNA): The early days

In 2015, we were a Drupal ‘n Angular (DNA) shop. We unofficially pioneered the industry by marrying Drupal and Angular together to build a modular, content-based website. We used Drupal as our CMS to control content and page configuration, and we used Angular to code front-end modules.

Front-end modules were small blocks of user interface that had data and some interactive elements. Content editors would move the modules around to visually create a page and use Drupal to create articles about the weather and publish them on the website.

DNA was successful in rapidly expanding the website’s content and giving editors the flexibility to create page content on the fly.

As our usage of DNA grew, we faced many technical issues which ultimately boiled down to three main themes:

  • Poor performance
  • Instability
  • Slower time for developers to fix, enhance, and deploy code (also known as velocity)

Poor performance

Our site suffered from poor performance, with sluggish load times and unreliable availability. This, in turn, directly impacted our ad revenue since a faster page translated into faster ad viewability and more revenue generation.

To address some of our performance concerns, we conducted different front-end experiments.

  • We analyzed and evaluated modules to determine what we could change. For example, we evaluated removing modules that were rarely used, and we rewrote modules so they wouldn’t pull in giant JS libraries.
  • We evaluated our usage of a tag manager in reference to ad-serving performance.
  • We lazy-loaded modules so they were excluded from the first page load, reducing the amount of JavaScript served to the client (see the sketch after this list).
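
The lazy-loading idea in the last item can be sketched with a dynamic import(), which modern bundlers split into a separate chunk; the module path and element IDs here are illustrative:

```js
// Hypothetical sketch of the lazy-loading experiment: a heavy module is
// excluded from the initial bundle and fetched only on demand.
async function showRadar(container) {
  // Bundlers such as Webpack split dynamic import() targets into separate
  // chunks, so this code is not part of the first page load.
  const { renderRadar } = await import('./modules/radar.js');
  renderRadar(container);
}

document.getElementById('radar-tab').addEventListener('click', () => {
  showRadar(document.getElementById('radar'));
});
```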

Instability

Because of the fragile deployment process of using Drupal with Angular, our site suffered from too much downtime. Deployment was a matter of entering the name of a git branch into a UI to release it into different environments. There was no real build process, only version control.

Ultimately, this led to many bad practices that impacted developers, including a lack of version-control methodology, non-reproducible builds, and the like.

Slower developer velocity

The majority of our developers had front-end experience, but very few of them were knowledgeable about the inner workings of Drupal and PHP. As a result, features and bug fixes related to PHP were not addressed as quickly due to knowledge gaps.

Large deployments contributed to slower velocity as well as stability issues, where small changes could break the entire site. Since a deployment covered the entire codebase (Drupal, Drupal plugins/modules, front-end code, PHP scripts, etc.), small code changes in a release could easily be overlooked, go untested, and break the deployment.

Overall, while we had a few quick wins with DNA, the constant regressions due to the setup forced us to consider alternative paths for our architecture.

Rethinking our architecture to include Node.js

Our first foray into using Node.js was a one-off project for creating a lite experience for weather.com that was completely server-side rendered and had minimal JavaScript. The audience had limited bandwidth and minimal device capabilities (for example, low-end smartphones using Facebook’s Free Basics).

Stakeholders were happy with the lite experience, commenting on the nearly instantaneous page loads. Analyzing this proof-of-concept was important in determining our next steps in our architectural overhaul.

Differing from DNA, the lite experience:

  • Rendered pages server-side only
  • Kept the front-end footprint under 30KB (virtually no JavaScript, little CSS, few images)

We used what we learned from the lite experience to help us serve our website more performantly. This started with rethinking our DNA architecture.

Metrics to measure success

Before we worked on a new architecture, we had to show our business that a re-architecture was needed. The first thing we had to determine was what to measure to show success.

We consulted with the Google Ad team to understand how exactly a high-performing webpage impacts business results. Google showed us proof that improving page speed increases ad viewability which translates to revenue.

With that in hand, each day we conducted tests across a set of pages to measure:

  • Speed index
  • Time to first interaction
  • Bytes transferred
  • Time to first ad call

We used a variety of tools to collect our metrics: WebPageTest, Lighthouse, and sitespeed.io.

As we compiled a list of these metrics, we were able to judge whether certain experiments were beneficial or not. We used our analysis to determine what needed to change in our architecture to make the site more successful.

While we intended to completely rewrite our DNA website, we acknowledged that we needed a stair-step approach to experimenting with a newer architecture. Using the above methodology, we created a beta page and A/B tested it to verify its success.

From Shark Tank to a beta of our architecture

Recognizing the performance of our original Node.js proof of concept, we held a “Shark Tank” session where we presented and defended different ideal architectures. We evaluated whole frameworks or combinations of libraries like Angular, React, Redux, Ember, lodash, and more.

From this experiment, we collectively agreed to move from our monolithic architecture to a Node.js backend and a newer React frontend. Our timeline for this migration was between nine months and a year.

Ultimately, we decided to use a pattern of small JS libraries and tools, similar to a UNIX operating system’s toolchain of commands. This pattern gives us the flexibility to swap out one component of the application instead of having to refactor large amounts of code to include a new feature.

On the backend, we needed to decouple page creation and page serving. We kept Drupal as a CMS and created a way for documents to be published out to more scalable systems which can be read by other services. We followed the pattern of Backends for Frontends (BFF), which allowed us to decouple our page frontends and allow for more autonomy of our backend downstream systems. We use the documents published by the CMS to deliver pages with content (instead of the traditional method of the CMS monolith serving the pages).

Even though Drupal and PHP can render server-side, our developers were more familiar with JavaScript, so using Node.js to implement isomorphic (universal) rendering of the site increased our development velocity.
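
The article doesn’t include code, but the core of isomorphic rendering reduces to something like the sketch below; the component and data are illustrative, not weather.com’s:

```js
// The same React component renders to an HTML string on the server and is
// reused (hydrated) on the client.
const { renderToString } = require('react-dom/server');
const { createElement } = require('react');

// Illustrative page component; real pages would be far richer.
function ForecastPage({ locationName, tempC }) {
  return createElement('main', null,
    createElement('h1', null, locationName),
    createElement('p', null, `Currently ${tempC}°C`));
}

// On the server, inside any Node.js HTTP handler:
const html = renderToString(createElement(ForecastPage, { locationName: 'Atlanta', tempC: 21 }));
// `html` is sent to the browser; the client later hydrates the same component.
```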


Developing with Node.js was an easy focus shift for our previously front-end-oriented developers. Since the majority of our developers had a primarily JavaScript background, we stayed away from solutions that revolved around separate server-side languages for rendering.

Over time, we implemented and evolved our usage from our first project. After developing our first few pages, we decided to move away from ExpressJS to Koa in order to use newer JS standards like async/await. We started with pure React but switched to the React-like Inferno.js.
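
A minimal Koa handler in the async/await style that motivated the move might look like this; the routes and document store are illustrative stubs, not weather.com’s code:

```js
const Koa = require('koa');
const app = new Koa();

app.use(async (ctx) => {
  // Linear await-based flow instead of nested callbacks or .then() chains.
  const doc = await loadPublishedDocument(ctx.path);
  if (!doc) {
    ctx.status = 404;
    return;
  }
  ctx.type = 'text/html';
  ctx.body = render(doc);
});

// Stubs standing in for the real CMS-document store and page renderer.
async function loadPublishedDocument(path) {
  return path === '/' ? { title: 'Home' } : null;
}
function render(doc) {
  return `<h1>${doc.title}</h1>`;
}

app.listen(3000);
```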

After evaluating many different build systems (gulp, grunt, browserify, systemjs, etc.), we decided to use Webpack to facilitate our build process. We saw Webpack’s growing maturity in a fast-paced ecosystem, as well as the pitfalls of its competitors (or lack thereof).

Webpack solved our core issue of DNA’s JS aggregation and minification. With a centralized build process, we could build JS code using a standardized module system, take advantage of the npm ecosystem, and minify the bundles (all during the build process and not during runtime).
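
A stripped-down webpack.config.js in the spirit of that centralized, build-time approach might look like the following; the paths are illustrative:

```js
// webpack.config.js: standardized modules, npm ecosystem, and
// minification all handled at build time rather than runtime.
const path = require('path');

module.exports = {
  mode: 'production',          // enables minification during the build
  entry: './src/index.js',     // illustrative entry point
  output: {
    path: path.resolve(__dirname, 'dist'),
    filename: '[name].[contenthash].js', // cache-busting bundle names
  },
};
```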

Moving from client-side to server-side rendering of the application increased our speed index and got information to the user faster. React helped us in this aspect of universal rendering: being able to share code on both the frontend and backend was crucial for server-side rendering and code reuse.

Our first launch of our beta page was a Single Page App (SPA). Traditionally, we had to render each page and location as a hit back to the origin server. With the SPA, we were able to reduce our hits back to the origin server and improve the speed of rendering the next view thanks to universal rendering.

The following image shows how much faster the webpage response was after the SPA was introduced.


As our solution included more Node.js, we were able to take advantage of a lot of the tooling associated with a Node.js ecosystem, including ESLint for linting, Jest for testing, and eventually Yarn for package management.

Linting and testing, as well as a more refined CI/CD pipeline, helped reduce bugs in production. This led to a more mature and stable platform as a whole, higher engineering velocity, and increased developer happiness.

Changing deployment strategies

Recognizing our problems with our DNA deployments, we knew we needed a better solution for delivering code to infrastructure. With our DNA setup, we used a managed system to deploy Drupal. For our new solution, we decided to take advantage of newer, container-based deployment and infrastructure methodologies.

By moving to Docker and Kubernetes, we achieved many best practices:

  • Separating out disparate pages into different services reduces failures
  • Building stateless services allows for less complexity, ease of testing, and scalability
  • Builds are repeatable (Docker images ensure the right artifacts are deployed and consistent)

Our Kubernetes deployment allowed us to be truly distributed across four regions and seven clusters, with dozens of services scaled from 3 to 100+ replicas running on 400+ worker nodes, all on IBM Cloud.

Addressing a familiar set of performance issues

After running a successful beta experiment, we continued down the path of migrating pages into our new architecture. Over time, some familiar issues cropped up:

  • Pages became heavier
  • Build times were slower
  • Developer velocity decreased

We had to evolve our architecture to address these issues.

Beta v2: Creating a more performant page

Our second evolution of the architecture was a renaissance (rebirth). We had to go back to basics, revisit our lite experience, and see why it was successful. We analyzed our performance issues and concluded that the SPA had become a performance bottleneck. Although an SPA benefits second-page visits, we came to understand that the majority of our users visit the website and leave once they get their information.

We designed and built the solution without an SPA, but kept React hydration in order to preserve code reuse across the server and client side. We paid more attention to tooling during development by ensuring that code coverage (the percentage of JS client code used versus delivered) was more efficient.
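
In practice, keeping hydration while dropping the SPA implies a client entry along these lines; names such as ForecastPage and __INITIAL_STATE__ are illustrative conventions, not weather.com’s code:

```js
// Client entry: the server has already rendered ForecastPage's HTML into
// #root, so the client attaches event handlers to the existing markup
// instead of re-rendering it from scratch.
const { hydrate } = require('react-dom');
const { createElement } = require('react');
const ForecastPage = require('./ForecastPage');

hydrate(
  createElement(ForecastPage, window.__INITIAL_STATE__), // props serialized by the server
  document.getElementById('root')
);
```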

Removing the SPA was also key to reducing build times. Since a page was no longer stitched together from a single entry point, we split the Webpack builds so that individual pages could have their own set of JS and assets.
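
That per-page split corresponds to multiple Webpack entry points, each producing its own bundle, roughly as sketched below with illustrative page names:

```js
// webpack.config.js: one entry per page instead of a single SPA entry,
// so each page ships only its own JS and assets.
const path = require('path');

module.exports = {
  mode: 'production',
  entry: {
    today: './src/pages/today.js',
    hourly: './src/pages/hourly.js',
    tenday: './src/pages/tenday.js',
  },
  output: {
    path: path.resolve(__dirname, 'dist'),
    filename: '[name].[contenthash].js',
  },
};
```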

We were able to reduce our page weight even further compared to the beta site. Reducing page weight had an overall impact on page load times. The graph below shows how the speed index decreased.

(Graph: speed index over time.) Note: Some data was lost between January and October of 2019.

This architecture is now our foundation for any and all pages on weather.com.

Conclusion

weather.com was not transformed overnight, and it took a lot of work to get where we are today. Adding Node.js to our ecosystem required some trial and error.

We achieved success by understanding our issues, collecting metrics, and implementing and then reimplementing solutions. Our journey was an evolution: not only did we change our back end, we also had to be smarter on the front end to achieve the best performance. Changing our deployment strategy and infrastructure allowed us to achieve multiple best practices, reduce downtime, and improve overall system stability. Using JavaScript on both the back end and front end improved developer velocity.

As we continue to architect, evolve, and expand our solution, we are always looking for ways to improve. Check out weather.com on your desktop, or for our newer/more performant version, check out our mobile web version on your mobile device.

New Node.js Training Course Supports Developers in their Certification, Technical and Career Goals


Last October, the OpenJS Foundation, in partnership with The Linux Foundation, released two Node.js certification exams to better support Node.js developers in showcasing their skills with the runtime. Today, we are thrilled to unveil the next phase of the OpenJS certification and training program with a new training course, LFW211 – Node.js Application Development.

LFW211 is a vendor-neutral training course geared toward developers who wish to master and demonstrate creating Node.js applications. The course trains developers on a broad range of Node.js capabilities in depth, equipping them with rigorous foundational skills and knowledge that will translate to building any kind of Node.js application or library.

By the end of the course, participants:

  • Understand foundational essentials for Node.js and JavaScript development
  • Become skillful with Node.js debugging practices and tools
  • Efficiently interact at a high level with I/O, binary data and system metadata
  • Attain proficiency in creating and consuming ecosystem/innersource libraries

Node.js Application Development will also help prepare those planning to take the OpenJS Node.js Application Developer certification exam. A bundled offering, including access to both the training course and the certification exam, is also available.

Thank you to David Clements, who developed this key training. David is a Principal Architect, public speaker, author of the Node Cookbook, and open source creator specializing in Node.js and browser JavaScript. He is also one of the technical leads and authors of the official OpenJS Node.js Application Developer and OpenJS Node.js Services Developer certifications.

Node.js is one of the most popular JavaScript runtime environments in the world. It powers hundreds of thousands of websites, including some of the most popular, such as those run by Google, IBM, Microsoft, and Netflix. Individual developers and enterprises use Node.js to power many of their most important web applications, making it essential to maintain a stable pool of qualified talent.

Ready to take the training? The course is available now. The $299 course fee ($499 for a bundled offering of both the course and the related certification exam) provides unlimited access to all course content and labs for one year. This course and exam, along with all Linux Foundation training courses and certification exams, are discounted 30% through May 31 with code ANYWHERE30. Interested individuals may enroll here.