OpenJS World Announces Full Schedule

By Announcement, Blog, OpenJS World

Join the open source JavaScript community at OpenJS Foundation’s free virtual global conference

The OpenJS Foundation is excited to announce the full schedule of keynote speakers, sessions and workshops for OpenJS World, the Foundation’s annual global conference. From June 23 to 24, developers, software architects, engineers, and other community members from OpenJS Foundation-hosted projects such as AMP, Dojo, Electron, and Node.js will tune in to network, learn and collaborate.

We will also use this time to celebrate the 25th anniversary of JavaScript. OpenJS World will showcase several key JavaScript contributors, many of whom will be leading JavaScript into the next 25 years.

Due to continuing COVID-19 safety concerns, OpenJS World 2020 will now take place as a free virtual experience, on the same dates: June 23 – June 24, in the US Central Time Zone. If you have already registered and paid, we will be in touch with you about your refund.

The conference will include inspiring keynotes, informative presentations, and hands-on workshops aimed at helping the OpenJS community better understand the latest and greatest JavaScript technologies.

Today we are excited to announce the keynote speakers, sessions, and hands-on workshops that will be featured at OpenJS World!

Keynote speakers

Session Highlights Include

  • Chronicles of the Node.js Ecosystem: The Consumer, The Author, and The Maintainer – Bethany Griggs, Open Source Engineer and Node.js TSC Member, IBM
  • Deno, a Secure Runtime for JavaScript and TypeScript – Ryan Dahl, Engineer, Deno Land
  • Fighting Impostor Syndrome with the Internet of Things – Tilde Thurium, Developer Evangelist, Twilio
  • From Streaming to Studio – The Evolution of Node.js at Netflix – Guilherme Hermeto, Senior Platform Engineer, Netflix
  • Hint, Hint!: Best Practices for Web Developers with webhint – Rachel Simone Weil, Edge DevTools Program Manager, Microsoft
  • Machine Learning for JavaScript Developers 101 – Jason Mayes, Senior Developer Advocate for TensorFlow.js, Google
  • User-Centric Testing for 2020: How Expedia Modernized its Testing for the Web – Tiffany Le-Nguyen, Software Development Engineer, Expedia Group

The conference covers a range of topics for developers and end-users alike including frameworks, security, serverless, diagnostics, education, IoT, AI, front-end engineering, and much more. 

Interested in participating online in OpenJS World? Register now

Also, sponsorships for this year’s event are available now. If you are interested in sponsoring, check out the event prospectus for details and benefits. 

For new and current contributors, maintainers, and collaborators to the Foundation, we are hosting the OpenJS Foundation Collaborator Summit on June 22, 25, and 26. This event is an ideal time for people interested in or working on projects to share, learn, and get to know each other. Learn more about registering for the OpenJS Collaborator Summit.

Thank you to the OpenJS World program committee for their tireless efforts in bringing in and selecting top tier keynote speakers and interesting and informative sessions. We are honored to work with such a dedicated and supportive community!

Project Update: Node.js version 14 available now

By Blog, Node.js, Project Update

This blog was written by Michael Dawson and Bethany Griggs, with additional contributions from the Node.js Community Committee and the Node.js Technical Steering Committee. This post initially appeared on the Node.js Blog. Node.js is an Impact Project of the OpenJS Foundation.

We’re excited to announce that Node.js 14 was released today! The highlights in this release include improved diagnostics, an upgrade of V8, an experimental Async Local Storage API, hardening of the streams APIs, removal of the Experimental Modules warning, and the removal of some long deprecated APIs.

Node.js 14 replaces Node.js 13 as our current release line. As per the release schedule (https://github.com/nodejs/Release#release-schedule), Node.js 14 will be the `Current` release for the next 6 months, and then promoted to Long-term Support (LTS) in October 2020. As always, corporate users should wait to upgrade their production deployments until October when Node.js is promoted to LTS. However, now is the best time to start testing applications with Node.js 14, and try out new features.

As a reminder — both Node.js 12 and Node.js 10 will remain in long-term support until April 2022 and April 2021 respectively (more details on the LTS strategy here).

Get started now! Learn how to download the latest version here: https://nodejs.org/en/download/current/

Before we dive into the features highlighted for this release, it’s important to note that new features added to the master branch flow quickly into the current release. This means that significant features become available in minor releases without too much fanfare. We’d like to take this opportunity to highlight some of those in the Node.js 14 release even though they may already have been backported to earlier releases.

Diagnostic Report goes Stable

The diagnostic report will be released as a stable feature in Node.js 14 (it was added as an experimental feature in Node.js 12). This is an important step in the ongoing work within the project to improve and build up the diagnostics available when using Node.js and the ease with which they can be used, with much of this work pushed forward by the Node.js Diagnostics Working Group.

The diagnostic report feature allows you to generate a report on demand or when certain events occur. This report contains information that can be useful to help diagnose problems in production, including crashes, slow performance, memory leaks, high CPU usage, unexpected errors, and more. For more information about the diagnostic report feature, see https://medium.com/the-node-js-collection/easily-identify-problems-in-node-js-applications-with-diagnostic-report-dc82370d8029. As a stable feature, there is one less command-line option needed to enable diagnostic reports, and it should be easier for users to enable them in production environments.
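
As a rough sketch of how this might look in practice (the file name and the flags shown are illustrative of the feature, not prescriptive):

// Generate a diagnostic report on demand from application code.
// The report is a JSON file capturing the JavaScript and native stacks,
// heap statistics, libuv handles, environment, and more.
process.report.writeReport('./on-demand-report.json');

// Reports can also be triggered automatically via flags, for example:
//   node --report-uncaught-exception --report-on-signal app.js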

V8 upgraded to V8 8.1

As always, a new version of the V8 JavaScript engine brings performance tweaks and improvements, as well as keeping Node.js up to date with ongoing improvements in the language and runtime. This time we also have some naming fun, with it being version 8 of V8 (“V8 of V8”).

Highlights of the new JavaScript features include (a short example follows the list):

  • Optional Chaining — MDN
  • Nullish Coalescing — MDN
  • Intl.DisplayNames  — MDN
  • Enables calendar and numberingSystem options for Intl.DateTimeFormat — MDN
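
As a quick illustration of the first two (the config object here is made up):

const config = { server: { port: 0 } };

// Optional chaining: evaluates to undefined instead of throwing
// when an intermediate property is missing.
const host = config.server?.host;         // undefined

// Nullish coalescing: falls back only on null or undefined,
// so a legitimate 0 or '' is preserved.
const port = config.server?.port ?? 3000; // 0, not 3000

console.log(host, port);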

For more information about the new features in V8, check out the V8 blog: https://v8.dev/blog.

Experimental Async Local Storage API

The project has been working on APIs to help manage context across asynchronous calls over a number of releases. The experimental Async Hooks API was introduced in earlier versions as part of this work. One of the key use cases for Async Hooks is Async Local Storage (also referred to as Continuation Local Storage). A number of npm modules have provided APIs to address this need; however, over the years these have been tricky to maintain outside of Node.js core, and the project reached a consensus that exploring a Node.js-provided API would make sense. The 14.x release brings an experimental Async Local Storage API (which was also backported into 13.10): https://nodejs.org/api/async_hooks.html#async_hooks_class_asynclocalstorage. We are looking for the community to try out this API and give us feedback on the abstraction model, API interface, use case coverage, functional stability, naming, documentation, etc., so that we can work on getting it out of experimental in later releases. The best way to provide feedback is to open an issue in the diagnostics repository (https://github.com/nodejs/diagnostics/issues) with a title along the lines of “Experience report with AsyncLocalStorage API”.
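
A minimal sketch of the new API (propagating a request id is a typical use case; the server and id scheme here are illustrative):

const { AsyncLocalStorage } = require('async_hooks');
const http = require('http');

const asyncLocalStorage = new AsyncLocalStorage();
let requestId = 0;

http.createServer((req, res) => {
  // Everything scheduled inside run() sees the same store via getStore(),
  // without threading the id through every function call.
  asyncLocalStorage.run({ id: ++requestId }, () => {
    setTimeout(() => {
      const store = asyncLocalStorage.getStore();
      res.end(`handled request ${store.id}\n`);
    }, 10);
  });
}).listen(3000);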

Streams

This release includes a number of changes marked as SemVer major in the Node.js Streams implementation. These changes are intended to improve consistency across the Streams APIs to remove ambiguity and streamline behaviors across the various parts of Node.js core. As an example, http.OutgoingMessage is similar to stream.Writable and net.Socket behaves exactly like stream.Duplex. A notable change is that the `autoDestroy` option now defaults to true, making the stream always call `_destroy` after ending. While we don’t believe these SemVer major changes will affect most applications, as they only change edge cases, if you rely heavily on Streams it would be good to test while Node.js 14 is the Current release so that your application is ready for when Node.js 14 becomes LTS in October 2020.
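
A small sketch of the `autoDestroy` change described above (the no-op writable is just for illustration):

const { Writable } = require('stream');

const sink = new Writable({
  write(chunk, encoding, callback) { callback(); }
});

sink.on('close', () => {
  // With autoDestroy now defaulting to true, the stream is destroyed
  // (and 'close' emitted) automatically once it has ended.
  console.log('destroyed:', sink.destroyed); // true
});

sink.end('done');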

Experimental Web Assembly System Interface

Packages written in Web Assembly for Node.js bring the opportunity for better performance and cross-platform support for certain use cases. The 14.x release includes an experimental implementation of the Web Assembly System Interface (WASI) in order to help support these use cases. While not new to Node.js 14, this is noteworthy as WASI has the potential to significantly simplify the native modules experience. You can read more about it in the API docs: https://nodejs.org/api/wasi.html.
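
A rough sketch based on the API docs; it must be run with the --experimental-wasi-unstable-preview1 flag, and the demo.wasm file (a WASI-targeting binary) is an assumption here:

// Run with: node --experimental-wasi-unstable-preview1 index.js
'use strict';
const fs = require('fs');
const { WASI } = require('wasi');

const wasi = new WASI({
  args: process.argv,
  env: process.env,
  preopens: { '/sandbox': './' } // directories the module is allowed to access
});

(async () => {
  const wasm = await WebAssembly.compile(fs.readFileSync('./demo.wasm'));
  const instance = await WebAssembly.instantiate(wasm, {
    wasi_snapshot_preview1: wasi.wasiImport
  });
  wasi.start(instance);
})();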

Removal of Experimental Modules Warning

In Node.js 13 we removed the need to include the `--experimental-modules` flag, but running EcmaScript Modules in Node.js would still result in a warning: `ExperimentalWarning: The ESM module loader is experimental.`

As of Node.js 14 there is no longer this warning when using ESM in Node.js. However, the ESM implementation in Node.js remains experimental. As per our stability index: “The feature is not subject to Semantic Versioning rules. Non-backward compatible changes or removal may occur in any future release.” Users should be cautious when using the feature in production environments.

Please keep in mind that the implementation of ESM in Node.js differs from the developer experience you might be familiar with. Most transpilation workflows support features such as optional file extensions or JSON modules that the Node.js ESM implementation does not support. It is highly likely that modules from transpiled environments will require a certain degree of refactoring to work in Node.js. It is worth mentioning that many of our design decisions were made with two primary goals: spec compliance and web compatibility. It is our belief that the current implementation offers a future-proof model for authoring ESM modules that paves the path to Universal JavaScript. Please read more in our documentation.
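
As a small illustration of that difference (the file names are made up), relative specifiers need the full file extension, unlike in many transpiler setups:

// greet.mjs
export function greet(name) {
  return `Hello, ${name}!`;
}

// app.mjs (run with: node app.mjs)
// Note the mandatory './greet.mjs' extension; a bare './greet' would not resolve here.
import { greet } from './greet.mjs';

console.log(greet('Node.js 14'));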

The ESM implementation in Node.js is still experimental but we do believe that we are getting very close to being able to call ESM in Node.js “stable”. Removing the warning is a huge step in that direction.

New compiler and platform minimums

Node.js provides pre-built binaries for a number of different platforms. For each major release, the minimum toolchains are assessed and raised where appropriate.

This release coincides with us moving all of our macOS binaries to be compiled on macOS 10.15 (Catalina) with Xcode 11 to support package notarization. As binaries are still being compiled to support the respective compile targets for the release lines, we do not anticipate this having a negative impact on Node.js users on older versions of macOS. For Node.js 14, we’ve bumped the minimum macOS target version to macOS 10.13 (High Sierra).

On our Linux based platforms, for Node.js 14 the minimum GCC level remains at GCC 6, however, we plan to build/release the binaries for some of the platforms with GCC 8.

Node.js 14 will also not run on End-of-Life Windows distributions.

Further details are available in the Node.js BUILDING.md.

Call to action

For the next 6 months, while it is in the ‘Current’ phase, Node.js 14 will receive the newest features that are contributed to Node.js. This release line is perfect for trying out the latest features, testing the compatibility of your project with the latest Node.js updates, and giving us feedback so that the release is ready to transition to LTS in October.

To download, visit: https://nodejs.org/en/download/current/

Thank you!

We’d like to use this opportunity to say a big thank you to all the contributors and Node.js collaborators that made this release come together. We’d also like to thank the Node.js Build Working Group for ensuring we have the infrastructure to create and test releases and making the necessary upgrades to our toolchains for Node.js 14. 

Maintainers Should Consider Following Node.js’ Release Schedule

By Blog, Node.js

This blog was written by Benjamin Coe. Ben works on the open-source libraries yargs, nyc, and c8, and is a core collaborator on Node.js. He works on the client libraries team at Google. This piece originally appeared on the Node.js Collection. Node.js is an impact project of the OpenJS Foundation.

tldr; Node.js has a tried and true release schedule, supporting LTS versions for 30 months. It offers significant benefits to the community for library maintainers to follow this same schedule:

  • ensuring the ability to take security patches.
  • reducing the burden on maintainers.
  • allowing module authors to take advantage of new platform features sooner.

My opinion of what Node.js versions library maintainers should aim to support has evolved over the years. Let me explain why…

The JavaScript ecosystem in 2014

I joined npm, Inc in April 2014. During this period, releases of Node.js had stalled. Node.js v0.10.x was released in April 2013, and Node.js v0.12.x wouldn’t be released until February 2015.

At the same time, the npm package registry was going through growing pains (see: “Outage Postmortem”, “Four hours of partial outage”, etc.).

The state of Node.js and npm in 2014 had side effects on how folks thought about writing libraries: maintainers didn’t need to put mental overhead into deciding what Node.js versions they supported (for years, the answer was 0.10.x); partially owing to npm’s instability, and partially owing to frontend communities not having fully embraced npm for distribution, package dependency trees were smaller.

Small building blocks, like mkdirp, still represented a significant portion of the registry in 2014.

Things would change in the intervening six years…

The JavaScript ecosystem today

In February of 2015, motivated by the io.js fork, The Node.js Foundation was announced. In September of that same year, Node.js v4.0.0 was released. Node.js v4.0.0 merged the io.js and Node.js projects, unblocked the release logjam, and introduced the 30-month LTS cycle advocated in this article.

Since Node.js v4.0.0, maintainers have been able to count on a regular cadence of releases, pulling in new JavaScript language features (through V8 updates), additions to the standard library (like HTTP/2), and important bug and security fixes.

In parallel, during the period between 2014 and today, npm significantly improved stability, and the frontend community began to consolidate on npm for distribution. The side effect was that many more packages were being published to npm (numbers grew from 50,000 in 2014, to 700,000 in 2018). At the same time, dependency trees grew (the average number of dependencies in 2016 was 35.3, the average number of dependencies in 2018 was 86).

A library maintainer in 2020 has at least three versions of Node.js to think about (the Current, Active, and Maintenance versions). Also, on average, their libraries rely on an increasing number of dependencies… it can be a bit daunting!

A great way for maintainers to handle the increasing complexity of the JavaScript ecosystem is to consider adopting Node.js’ 30-month LTS schedule for their own libraries.

Here’s how adopting this schedule benefits both the library authors and the community…

Being able to take security patches

A security vulnerability was recently reported for the library minimist. minimist is the direct dependency of 14,000 libraries… it’s a transitive dependency of the universe.

The popular templating library Handlebars was bitten by this report through an indirect dependency (optimist). Handlebars was put in a position where it was difficult to silence this security warning without disrupting its users:

  • optimist, deprecated several years earlier, was pinned to an unpatched version (~0.0.1) of minimist.
  • Handlebars itself supported Node.js v0.4.7, making it a breaking change to update to yargs (optimist’s pirate-themed successor).

Although motivated by good intentions (“why not support as many environments as possible?”), when libraries support end-of-life versions of Node.js, it can ultimately end in disruptions for users. Maintainers find themselves bumping a major version as a fire drill, rather than as a scheduled update.

Dropping support for old @nodejs release is a breaking change and it should be released in a major version. – Matteo Collina

The wide adoption of Node.js’ LTS schedule for modules ensures that security patches can always be taken.

Reducing the burden on maintainers

Keeping dependencies up to date is a lot of work (my team at Google landed 1,483 pull requests updating dependencies last month), but it’s also important:

  • the closer to a dependency’s release you catch an unintended breakage, the more likely it will be quickly fixed or rolled back.
  • keeping dependencies fresh helps ensure that critical vulnerabilities and bug fixes are rolled out to your own users (this avoids the Handlebars/minimist issue discussed).

Tools like Dependabot and Renovate make sure updating dependencies isn’t a maintainer’s full-time job. However, if libraries don’t adhere to the same version support policy, it makes automation difficult. As an example, because of falling behind the scheduled deprecation of Node.js v8.x.x, the library yargs turned off automatic updates for decamelize (opening itself up to all the risks that go along with this).

A lot of open-source is made possible by the volunteer work of maintainers. I can’t think of many things less exciting than the constant auditing of the SemVer ranges advertised in the “engines” fields of dependencies.

The wide adoption of Node.js’ LTS schedule for modules creates consistency and reduces the maintainer burden around updating dependencies.

Helping to evolve the platform

For the last couple of years, I’ve been involved in the Node.js Tooling Group. We’ve advocated for a variety of API improvements for tooling authors, such as recursive directory creation, recursive directory removal, and Source Map support for stack traces.
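
For instance, a brief sketch of the recursive fs additions (the paths are illustrative):

const fs = require('fs');

// Create nested directories in one call (recursive option added in v10.12.0).
fs.mkdirSync('build/output/reports', { recursive: true });

// Remove a directory tree (recursive option added in v12.10.0).
fs.rmdirSync('build/output', { recursive: true });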

In Node.js v8.4.0, http2 support was added. This addition is near and dear to my heart, since it allows Google’s client libraries (which rely on HTTP/2) to run natively on Node.js.
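
A minimal sketch of the core http2 API (an unencrypted server for brevity; real deployments would typically use createSecureServer):

const http2 = require('http2');

const server = http2.createServer();

// Each HTTP/2 stream is handled here rather than via a per-request callback.
server.on('stream', (stream, headers) => {
  stream.respond({ ':status': 200, 'content-type': 'text/plain' });
  stream.end('hello over HTTP/2\n');
});

server.listen(3000);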

JavaScript itself is an evolving platform. Node.js regularly updates the V8 JavaScript engine, pulling in new language features, such as async iterators, async/await, and spread operators.

Keeping the Node.js core small will always be an architectural goal of the project. Node.js is, however, an evolving platform.

The wide adoption of Node.js’ LTS release schedule allows module authors to leverage exciting new features that are added to the platform.


What actions am I advocating that library maintainers take?

  1. When you release a new library, set the engines field to the oldest active LTS version of Node.js (a sketch follows this list).
  2. When a Node.js version reaches end-of-life, even if your library is finished, bump a major version updating the engines field to the current oldest active LTS of Node.js.
  3. Consider throwing a helpful exception if your library is used on an unsupported Node.js version.
  4. Consider documenting your version support policy (here’s an example of the one we wrote for my team).
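
A hedged sketch of actions 1 and 3 (the package name "my-lib" and the minimum version are illustrative): declare the supported range in package.json, for example "engines": { "node": ">=10" }, and optionally fail fast at runtime.

// index.js of a hypothetical library "my-lib" whose package.json declares
//   "engines": { "node": ">=10" }
const [major] = process.versions.node.split('.').map(Number);

if (major < 10) {
  throw new Error(
    `my-lib requires Node.js >= 10 (the oldest active LTS at release time); ` +
    `you are running ${process.version}.`
  );
}

module.exports = { /* the library's actual exports */ };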

The Node.js Package Maintenance Working Group is developing further recommendations related to library support policies, such as Cloud Native’s Long Term Support for Node.js Modules. This policy takes this article’s recommendations a step further and suggests that module maintainers support a major library version for the lifetime of the Node.js runtime it was released under. This means that, if your library releases v1.0.0 with support for Node v10.x.x, you would continue to backport security and bug fixes to v1.0.0 until Node v10.x.x reaches end-of-life. Committing to this level of support can be a great move for libraries targeting enterprise users (and, they even have a badge!).

Tracking Node.js’ release schedule saves you time, makes libraries more secure, allows you to use Node.js’ fancy new features, and benefits the community as a whole. Let’s get good about doing this together.

You Told Us: OpenJS Node Certification helps you stand out

By Blog, Certification, Node.js

The Node.js industry is mature and there is more demand for Node skills than there are qualified developers. OpenJS Node certifications create new opportunities for developers, and are an excellent way to improve your resume, and more quickly move to projects and jobs that are higher-paying and more fulfilling.

We asked a group of developers who took at least one of the certifications in the past three months about their experiences. Two major themes stand out.

  1. Yes, money is important, but effectively testing your own skills is important, too
  2. A vendor-neutral certification is better and the OpenJS format really challenges you

Nikita Galkin, Independent Contractor, JSFest Program Committee Member, Software Engineer, System Architect, Node.js Tech Speaker, GraphQL Advocate, talked about standing out:

Remote work in the global world has high competition between developers. With a certificate, you are more likely to receive an invitation to the job interview.

I received an offer for an interesting remote project with a good salary at the start of this year. At the end of the tech interview I was asked “what is this certification?” and “how complicated was it?”.

I do not think the certification was critical in deciding in my favour, but that was one of the things that made me different from other applicants.

Patrick Heneise, Software Consultant & CEO, Zentered.co, explained it was about testing his own skills:

I wanted to know, after almost 9 years of Node.js, where I stand. Having a Certified Node.js badge on my social profile is an easy sign for potential customers and clients that my knowledge has been tested.

I wasn’t looking for a new job or more money, so I can’t tell if it helped. But it definitely helped me know my strengths and weaknesses, and I found out where I need to improve my own skills.

João Moura, Lead Technical Architect at Isobar Switzerland, likes how OpenJS certification tests differ from vendor-specific tests:

I think it’s a major benefit to have a certification in NodeJS. These days, NodeJS is becoming one of the major development infrastructures and I want to be part of that. The certification is one more step in that direction.

From the experience I have, the vendor-specific exams tend to have questions that are there just to show you how great their product is, for example:

“what can this product do?

A: something

B: another thing

C: awesome things

D: all of the above”

The answer is obviously D, and now you have a certificate on that product, congratulations :).

Since this is vendor-neutral, the exam is a lot more directed at seeing what the user does to solve a specific problem. There is no selling material; the person taking the exam needs to really understand the problem and solve it in a good and quick way. And that, for me, is a lot more entertaining :).

Justin Dennison, Edutainer at ITProTV, says that completing tasks, instead of answering questions, was closer to a real development environment:

I enjoyed the test-taking experience as it was the first exam that I had taken that was simulated and practical in nature. Instead of answering multiple-choice questions (or any of the other types), I thought that completing the tasks was more akin to my experience in a development environment. The testing was thorough for Node.js as a whole. I feel that vendor-neutral testing allows for an alternative perspective to testing as well as a means to gather community driven requirements.

I took the certification to validate my own understanding and learning. I had been developing using Node.js and teaching Node.js for several years. However, as always, there are times that you will question yourself, “Do I really know or understand what is going on?” Knowing that I was given tasks to complete and was able to complete those tasks using Node.js was a nice confirmation of my knowledge.

Amir Elemam, Independent Contractor, found the format of the test excellent; it resulted in more interesting relationships and projects at work:

The coding labs exam format was totally new for me. On the one hand it was harder, because if I wasn’t pretty sure about something, to the point where I would know what to search for, there was no way to even attempt the question. But on the other hand, I was able to test the code I was writing, so in the end I had a very good sense of my performance. That’s good, because the results don’t come out by the end of the exam, as they do with other certification exams I’ve taken.

The first thing it did was boost my self-confidence; I no longer have any shred of doubt about my Node.js capabilities. Also, it improved how confident others were about my Node.js skills, which improved relationships, and more challenges were given to me.

Find out more about the OpenJS certification programs, and sign up now!

The OpenJS Foundation turns one

By Blog

We’ve just crossed the one-year anniversary since the Node.js and JS Foundations were merged last March to create the OpenJS Foundation. So many folks pulled together to bootstrap OpenJS, and based on that thoughtful and thorough work, we were quickly set up as a positive place for future growth. We didn’t want to let this milestone pass without saying a big thank you to the JavaScript communities for making the first year as a Foundation absolutely wonderful!

We know it’s not an easy time for our global community. Feelings of isolation and scarcity are real, and we hope this community can be a safe place to learn, collaborate and grow. Engaging with you all online has made us feel more connected and helps brighten our days. As we reflect on the one year since forming, we wanted to send sincere thanks to everyone who made our first year so special.

Thank you for the warm and inviting welcome! My first year on the job has been a dream come true thanks to this amazing community. 

Thank you to our newest Incubating Projects Node Version Manager (nvm), Fastify, AMP and Electron. Your trust in the Foundation to be good stewards of your important technology is an honor that we do not take lightly. We look forward to being your partner and witnessing the incredible things to come. 

Thank you to ALL of our projects and the project maintainers, collaborators, and contributors. Our Foundation would not exist without you and the value you bring to our entire community. In the spirit of bringing everyone together, we were proud to partner with this community on two OpenJS Foundation Collab Summits, bringing all projects together. You can go to https://github.com/openjs-foundation/summit to get more info and see a list of past and future events. And, yes, like everyone around the globe, we are adjusting to the new normal and are looking to expand our abilities to run highly useful virtual events. Stay tuned!

Thank you to each member of the Cross Project Council and Board of Directors for their dedication and tireless effort in ensuring the best version of our Foundation continues to emerge. We have made incredible progress and know our future is bright because we have such strong, passionate and smart leaders at the helm. 

Thank you for the excitement and valuable feedback surrounding the Node.js Certifications. We launched the professional certification program to support the future of Node.js development in October. The two certification programs – OpenJS Node.js Application Developer (JSNAD) and OpenJS Node.js Services Developer (JSNSD) – are aimed at Node.js developers and are designed to demonstrate competence within the Node.js framework. The programs were developed in partnership with NearForm and NodeSource and are available immediately. Additionally, thank you to NearForm, NodeSource, and TrueAbility for their partnership and expertise in getting these off the ground!

Thank you to our newest members Netflix, Skyscanner, and Vincit for joining the foundation and investing in the future of open-source JavaScript. Also, thank you to all of our members for their support of the OpenJS Foundation. If you are not already a member and are interested in learning more, we are happy to talk through the amazing benefits.

What’s next
No one has a crystal ball that can predict exactly what’s in store for us as a Foundation this coming year, but one thing is for sure: it will be just as exciting and fast-paced as the first year. We will continue to add new projects and members while growing our shared resource pool for the benefit of our community. Things like more cross-project support, an ecosystem-wide focus on security, and overall growth will certainly be part of our year.

Get Involved!
As we head into year two, we are looking for lots of ways for new folks to get involved! We’d love to collaborate with you.

30% off Node.js Certifications through April 30th

By Announcement, Blog, Certification, Node.js

A Node.js Certification is a great way to showcase your abilities in the job market and allow companies to find top developer talent — and now these exams are 30% off.

In October, the OpenJS Foundation announced the OpenJS Node.js Application Developer (JSNAD) and OpenJS Node.js Services Developer (JSNSD) certification programs, which are designed to demonstrate competence within the Node.js framework. 

Until April 30, 2020, these certification exams are 30% off the regular $300 per exam cost. Use coupon code ANYWHERE30 to save 30%.

You have up to a year to study and take the exam, and given that many in our community must stick close to home due to global health concerns, we wanted to lighten the load. Our exams are proctored virtually, so exam takers don’t have to travel to testing centers and can take exams from the comfort and safety of their own homes or workplaces, reducing the time and stress required.

About the Exams
OpenJS Node.js Application Developer (JSNAD)
The OpenJS Node.js Application Developer certification is ideal for the Node.js developer with at least two years of experience working with Node.js. For more information and how to enroll: https://training.linuxfoundation.org/certification/jsnad/

OpenJS Node.js Services Developer (JSNSD)
The OpenJS Node.js Services Developer certification is for the Node.js developer with at least two years of experience creating RESTful servers and services with Node.js. For more information and how to enroll: https://training.linuxfoundation.org/certification/jsnsd/

Both exams are two-hour, performance-based exams delivered via a browser-based terminal, and each includes an automatic free retake (if needed). Exams are monitored by a live human proctor and are conducted online in English. Certification is valid for three years and includes a PDF certificate and a digital badge. Corporate pricing for groups of five or more is available.

Register today to become a Node.js certified developer.


AMA Recap from the Node.js Technical Steering Committee

By AMA, Blog, Node.js

Members of the Technical Steering Committee (TSC) for Node.js gave an informative AMA, which you can watch below. Speakers include Michael Dawson (@mhdawson1), Matteo Collina (@matteocollina), Gireesh Punathil (@gireeshpunam), Gabriel Schulhof (@gabrielschulhof), Bethany Griggs (@BethGriggs_), Colin Ihrig (@cjihrig), and Myles Borins (@MylesBorins).

Full video here

In this AMA, the TSC took questions from the live chat and gave insight into how they got involved. Questions ranged from whether Node.js is good for image processing to thoughts on Deno. The TSC focused on a mix of pre-existing and user-generated questions.

Beginning with suggestions on how to get involved with Node and ending on the same note, this AMA can inspire individuals to join Node.js.

Video by Section

Introductions (1:08)

How to Get Involved (4:48)

When To Update Your LTS? (13:45)

Is Node Good For Image Processing Applications? (34:45)

Upcoming 14RX (42:00)

What Do You Think About Deno? (44:20)

Yarn v2 Module (51:07)

Wrap Up (53:55)

Photo Credit: Myles Borins

Our next AMA will feature OpenJS project Node-RED! Submit your questions for the Node-RED team here!

Node.js in a Kubernetes world

By Blog, Node.js, Node+JS Interactive, tutorial

7 basic tasks Node.js developers need to understand about Kubernetes

This post was written by Michael Dawson, OpenJS Foundation Board Member and Node.js community lead at IBM. This first appeared on IBM Developer.

Kubernetes is fast becoming the leader for deploying and managing production applications, including those written in Node.js. Kubernetes is complex, though, and learning the ins and outs of the technology can be difficult, even for a seasoned developer.

Node.js application developers may not need to manage Kubernetes deployments in our day-to-day jobs or be experts in the technology, but we must consider Kubernetes when developing applications.

As a reminder, Docker and Kubernetes are the foundation of most modern clouds, including IBM Cloud. Cloud-native development refers to creating applications that you will deploy in Docker or Kubernetes, and cloud-native applications often conform to the tenets of a 12 Factor application.

In this article, I try to answer the question: “As a Node.js developer, what do I need to know about Kubernetes?” I cover a number of key tasks you must complete when building your Node.js application in a cloud-native manner, including:

  • Building Docker images
  • Deploying containerized applications
  • Testing your applications
  • Health checking
  • Logging
  • Gathering metrics
  • Upgrading and maintaining containerized applications

Building Docker images

Deploying to Docker or Kubernetes requires that you build a Docker image. To do that, you typically start with an existing image and layer in the additional components that you need.

When building Node.js applications, the community provides a number of official Docker images that you can start with. These images add the Node.js binaries to an existing Linux distribution (Debian or Alpine) and offer a way to bundle your application into the image so that it runs when you start the Docker container.

Each time the project publishes a new release, it creates new Docker images. For example, when 12.13.1 was released, the new Docker image “node:12.13.1” was made available.

There are three variants of the Node.js images:

  1. Debian-based images with the core components needed to build and test Node.js applications.
  2. Slim images which are Debian-based images with only the minimal packages needed to run a Node.js application after it is already built.
  3. Alpine-based images for those who need the smallest container size.

There are a number of tags that you can use to specify the image you want. See Docker Hub for the full list.

When choosing your base image, keep in mind:

  • Alpine binaries are not built as part of the main Node.js releases. This means that they are built from source and there is more risk of regressions.
  • You should likely use both the standard Debian image and the Slim image in a multistage build in order to achieve a smaller resulting image size. More on that in the next section.

Build your image

Once you’ve chosen your base image, the next step is to actually build the image which bundles in your application. This is done through a Dockerfile. The simplest Dockerfile is as follows:

FROM node:12.13.1
EXPOSE 3000
COPY server.js .
CMD node server.js

This Dockerfile simply copies your application code (server.js) into the official image with the node:12.13.1 binaries and indicates that server.js should be run when the image is started. Note that you did not have to install Node.js as it is already installed and in the default path as part of the node:12.13.1 base image.

To build an image from this file, put the file into a directory with your application code and then from that directory run docker build . -t test1:new. This builds a Docker image named test1 with the new tag, based on the contents of the current directory.

While this sounds straightforward, Dockerfiles can quickly become complicated because you want to build the image in a way that only includes the minimum components needed to run your application. This should exclude intermediate artifacts, for example the individual object files created when native addons are compiled. To exclude intermediate artifacts, a multistage build is commonly used where artifacts are generated in one step or image and then copied out to another image in a later stage. Read more about this in the Docker documentation.
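
A hedged sketch of what such a multistage Dockerfile might look like for a typical Node.js application (the file names and npm scripts are assumptions about your project):

# Stage 1: install dependencies and build using the full Debian-based image
FROM node:12.13.1 AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build && npm prune --production

# Stage 2: copy only what is needed to run into the Slim image
FROM node:12.13.1-slim
WORKDIR /app
COPY --from=build /app/node_modules ./node_modules
COPY --from=build /app/server.js ./
EXPOSE 3000
CMD ["node", "server.js"]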

Once you have built the image, you need to push it to a registry and then you can run and test your application under Docker or Kubernetes. It’s best to test in as close an environment to the production deployment as possible. Today, that often means Kubernetes.

Deploying your application to Kubernetes

In order to deploy and test your image in Kubernetes you use one or more deployment artifacts. At the lowest level, much of the Kubernetes configuration is through YAML. To deploy and access your application through a port, you need three YAML files.

To start, you need a deployment which specifies which image to use (the one you pushed to the registry earlier). For example:

apiVersion: apps/v1
kind: Deployment
metadata:
    name: test1
    labels: 
        app: test1
spec:
    replicas: 2
    selector:
        matchLabels:
            app: test1
    template:
        metadata:
            labels:
                app: test1
        spec:
            containers:
            - name: test1
              image: test1:new
              imagePullPolicy: Never
              args:

In this case, it’s specifying through the image: entry that it should use the Docker image named test1 with the new tag (which matches what we built in the earlier step). You would use the following command to deploy to Kubernetes: kubectl apply -f deployment.yaml.

Similarly, you’d need to deploy a service to provide a way to get to the running containers that make up the deployment and an ingress to allow external access to that service:

kubectl apply -f service.yaml

kubectl apply -f ingress.yaml

I won’t go into the details here, but the key is that you need to be able to understand and write the YAML for deployments, services, and ingress.

If this sounds a bit cumbersome, it is, and you are not the first one to think so. Helm charts were developed to bring together a deployment, service, and ingress in a way that simplifies deployments.

The basic structure of a Helm chart is as follows:

package-name/
  charts/
  templates/
  Chart.yaml
  LICENSE
  README.md
  requirements.yaml
  values.yaml

The base Helm chart has templates which you can configure to avoid having to write your own deployment, service, and ingress YAML files. If you have not already written those files, it is probably easier to use that configuration option. You can, however, just copy your files into the template directory and then deploy the application with helm install dirname, where dirname is the name of the directory in which the chart files are located.

Helm is billed as “The package manager for Kubernetes” and does make it easier to deploy an application configured through multiple YAML files and possibly multiple components. There are, however, shortcomings, including concerns over security and relying only on static configuration versus being able to have programmatic control. Operators look to address these issues. Read more about them here: https://github.com/operator-framework.

Iterative testing

After you figure out how to deploy your application, you need to test it. The development and test process is iterative, so you will probably need to rebuild, deploy, and test a number of times. Check out our article on test-driven development.

If you are testing in Docker, you need to build a new image, start a container, and then access the image during each iteration. If you are testing in Kubernetes, you have to delete pods, redeploy, and so forth. Even when using Helm charts, this will take a number of commands and time for things to spin up. The key challenge is that this all takes time and that can slow down your development process.

So far, we’ve talked about building and testing an image for your application, but not what’s in your application itself. I can’t talk about your specific application, but let’s look at what you should build into your application in order to support a cloud-native deployment.

Health checking your applications

The first thing you need to build into your application is support for liveness and readiness endpoints. Kubernetes has built-in functionality to check these endpoints:

  • Liveness – restarts the container when the liveness endpoint does not respond, indicating the application is no longer alive.
  • Readiness – defers sending traffic until the application is ready to accept traffic, or stops sending traffic if the application is no longer able to accept traffic.

Kubernetes supports three probe types.

  • Running a command (through shell in the container)
  • HTTP probe
  • TCP probe

As a Node.js developer, you will probably use an HTTP probe. A response code of >=200 and less than 400 means everything is okay.

You configure the probes through the deployment YAML. For example:

apiVersion: v1
kind: Pod
metadata:
    name: test1
    labels:
        app: test1
spec:
    containers:
    - name: test1
      image: test1:latest
      imagePullPolicy: Never
      args:
      - /server
      livenessProbe:
          httpGet:
              path: /live
              port: 3000
          initialDelaySeconds: 5
          periodSeconds: 10
      readinessProbe:
          httpGet:
              path: /ready
              port: 3000
          initialDelaySeconds: 1
          periodSeconds: 10

In this code listing, I added a livenessProbe on port 3000 using the /live path.

Additionally:

  • initialDelaySeconds controls how quickly to start checking (in order to allow enough time for the application to start up).
  • periodSeconds controls how often to check.
  • timeoutSeconds is 1 by default, so if your probe can take longer than that you’ll have to set that value as well.
  • failureThreshold controls how many times the probe needs to fail before the container will restart. You want to ensure that you don’t get into a loop where the application continues to restart and never has long enough to become ready to serve the probes.

Similarly, I added a readinessProbe on port 3000 using the /ready path. To test this out, I created an application that only responds successfully to the readinessProbe after 10 seconds. If I query the pods after deploying the application the output is:

user1@minikube1:~/test$ kubectl get all
NAME           READY       STATUS        RESTARTS        AGE
pod/test1      0/1         Running       0               6s

This shows 0/1 under READY, indicating that the application is not yet ready to accept traffic. After 10 seconds, it should show 1/1, and Kubernetes will start routing traffic to the pod. This is particularly important when we scale up or down, as we don’t want to route traffic to a new container until it is ready; otherwise it will slow responses, even though another container is up and already capable of responding quickly.

The readiness and liveness probes may be as simple as:

const express = require('express');
const app = express();

// Handlers matching the probe paths configured above.
app.get('/ready', (req, res) => {
  res.status(200).send();
});

app.get('/live', (req, res) => {
  res.status(200).send();
});

app.listen(3000);

However, you will often need to check the liveness and readiness of other components within your application. This can get complicated if you have a lot of subcomponents. If you have a lot of subcomponents to check, I recommend using a package like CloudNativeJS/cloud-health that allows you to register additional checks.

Logging

Node.js developers also need to know how to do logging in a cloud-native environment. In container development, writing logs out to disk does not generally make sense because of the extra steps needed to make the logs available outside the container—and they will be lost once the container is stopped.

Logging to standard out (stdout) is the cloud-native way, and structured logging (for example, using JSON) is the current trend. One of my current favorite modules is pino, which is a fast, structured logger that is easy to use.
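
A quick sketch of structured logging to stdout with pino (the fields logged here are arbitrary):

const pino = require('pino');
const logger = pino();

// Each call emits a single JSON line to stdout, similar to the
// log output shown in the kubectl example below.
logger.info({ path: '/live' }, '/live');
logger.error({ path: '/ready' }, 'down');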

Kubernetes has support for watching standard out, so it’s easy to get the log output. For example, you can use kubectl logs -f <name of pod>, where <name of pod> is the name of a specific pod, as follows:

user1@minikube1:~/test$ kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
test1-7bbc4449b4-98w2j   1/1     Running   0          30s
test1-7bbc4449b4-zqx6f   1/1     Running   0          30s
user1@minikube1:~/test$ kubectl logs -f test1-7bbc4449b4-98w2j
{"level":30,"time":1584451728646,"pid":1,"hostname":"test1-7bbc4449b4-98w2j","msg":"/live","v":1}
{"level":30,"time":1584451734486,"pid":1,"hostname":"test1-7bbc4449b4-98w2j","msg":"ready","v":1}
{"level":30,"time":1584451738647,"pid":1,"hostname":"test1-7bbc4449b4-98w2j","msg":"/live","v":1}
{"level":30,"time":1584451748643,"pid":1,"hostname":"test1-7bbc4449b4-98w2j","msg":"/live","v":1}
{"level":30,"time":1584451758647,"pid":1,"hostname":"test1-7bbc4449b4-98w2j","msg":"/live","v":1}
{"level":30,"time":1584451768643,"pid":1,"hostname":"test1-7bbc4449b4-98w2j","msg":"/live","v":1}
{"level":50,"time":1584451768647,"pid":1,"hostname":"test1-7bbc4449b4-98w2j","msg":"down","v":1}
{"level":30,"time":1584451778648,"pid":1,"hostname":"test1-7bbc4449b4-98w2j","msg":"/live","v":1}
{"level":30,"time":1584451788643,"pid":1,"hostname":"test1-7bbc4449b4-98w2j","msg":"/live","v":1}

Cloud deployment environments like the IBM Cloud also have more sophisticated options for collecting and aggregating logs across containers, including LogDNA or the Elastic Stack (Elasticsearch, Logstash, and Kibana).

Gathering metrics

In a Kubernetes deployment, your application runs in containers, there may be multiple copies of each container, and it is not necessarily easy to find or access those containers in order to gather information about how your application is running. For this reason, it is important to export the key metrics from your container that you need to understand and track the health of your application.

Prometheus is the de facto standard on this front. It defines a set of metrics that you should export and gives you the ability to add additional application-specific metrics that are important to your business.

As an example, the following shows the metrics returned using one of the available Prometheus packages:

# HELP process_cpu_user_seconds_total Total user CPU time spent in seconds.
# TYPE process_cpu_user_seconds_total counter
process_cpu_user_seconds_total 1.7980069999999984 1585160007722

# HELP process_cpu_system_seconds_total Total system CPU time spent in seconds.
# TYPE process_cpu_system_seconds_total counter
process_cpu_system_seconds_total 0.931571000000001 1585160007722

# HELP process_cpu_seconds_total Total user and system CPU time spent in seconds.
# TYPE process_cpu_seconds_total counter
process_cpu_seconds_total 2.7295780000000067 1585160007722

# HELP process_start_time_seconds Start time of the process since unix epoch in seconds.
# TYPE process_start_time_seconds gauge
process_start_time_seconds 1584451714

# HELP process_resident_memory_bytes Resident memory size in bytes.
# TYPE process_resident_memory_bytes gauge
process_resident_memory_bytes 17645568 1585160007723

# HELP process_virtual_memory_bytes Virtual memory size in bytes.
# TYPE process_virtual_memory_bytes gauge
process_virtual_memory_bytes 881635328 1585160007723

# HELP process_heap_bytes Process heap size in bytes.
# TYPE process_heap_bytes gauge
process_heap_bytes 90226688 1585160007723

# HELP process_open_fds Number of open file descriptors.
# TYPE process_open_fds gauge
process_open_fds 20 1585160007723

# HELP process_max_fds Maximum number of open file descriptors.
# TYPE process_max_fds gauge
process_max_fds 201354

# HELP nodejs_eventloop_lag_seconds Lag of event loop in seconds.
# TYPE nodejs_eventloop_lag_seconds gauge
nodejs_eventloop_lag_seconds 0.000200848 1585160007723

# HELP nodejs_active_handles Number of active libuv handles grouped by handle type. Every handle type is C++ class name.
# TYPE nodejs_active_handles gauge
nodejs_active_handles{type="Socket"} 2 1585160007722
nodejs_active_handles{type="Server"} 1 1585160007722

# HELP nodejs_active_handles_total Total number of active handles.
# TYPE nodejs_active_handles_total gauge
nodejs_active_handles_total 3 1585160007722

# HELP nodejs_active_requests Number of active libuv requests grouped by request type. Every request type is C++ class name.
# TYPE nodejs_active_requests gauge
nodejs_active_requests{type="FSReqCallback"} 2

# HELP nodejs_active_requests_total Total number of active requests.
# TYPE nodejs_active_requests_total gauge
nodejs_active_requests_total 2 1585160007722

# HELP nodejs_heap_size_total_bytes Process heap size from node.js in bytes.
# TYPE nodejs_heap_size_total_bytes gauge
nodejs_heap_size_total_bytes 5971968 1585160007723

# HELP nodejs_heap_size_used_bytes Process heap size used from node.js in bytes.
# TYPE nodejs_heap_size_used_bytes gauge
nodejs_heap_size_used_bytes 4394216 1585160007723

# HELP nodejs_external_memory_bytes Nodejs external memory size in bytes.
# TYPE nodejs_external_memory_bytes gauge
nodejs_external_memory_bytes 3036410 1585160007723

# HELP nodejs_heap_space_size_total_bytes Process heap space size total from node.js in bytes.
# TYPE nodejs_heap_space_size_total_bytes gauge
nodejs_heap_space_size_total_bytes{space="read_only"} 262144 1585160007723
nodejs_heap_space_size_total_bytes{space="new"} 1048576 1585160007723
nodejs_heap_space_size_total_bytes{space="old"} 3256320 1585160007723
nodejs_heap_space_size_total_bytes{space="code"} 425984 1585160007723
nodejs_heap_space_size_total_bytes{space="map"} 528384 1585160007723
nodejs_heap_space_size_total_bytes{space="large_object"} 401408 1585160007723
nodejs_heap_space_size_total_bytes{space="code_large_object"} 49152 1585160007723
nodejs_heap_space_size_total_bytes{space="new_large_object"} 0 1585160007723

# HELP nodejs_heap_space_size_used_bytes Process heap space size used from node.js in bytes.
# TYPE nodejs_heap_space_size_used_bytes gauge
nodejs_heap_space_size_used_bytes{space="read_only"} 32296 1585160007723
nodejs_heap_space_size_used_bytes{space="new"} 419112 1585160007723
nodejs_heap_space_size_used_bytes{space="old"} 2852360 1585160007723
nodejs_heap_space_size_used_bytes{space="code"} 386368 1585160007723
nodejs_heap_space_size_used_bytes{space="map"} 308800 1585160007723
nodejs_heap_space_size_used_bytes{space="large_object"} 393272 1585160007723
nodejs_heap_space_size_used_bytes{space="code_large_object"} 3552 1585160007723
nodejs_heap_space_size_used_bytes{space="new_large_object"} 0 1585160007723

# HELP nodejs_heap_space_size_available_bytes Process heap space size available from node.js in bytes.
# TYPE nodejs_heap_space_size_available_bytes gauge
nodejs_heap_space_size_available_bytes{space="read_only"} 229576 1585160007723
nodejs_heap_space_size_available_bytes{space="new"} 628376 1585160007723
nodejs_heap_space_size_available_bytes{space="old"} 319880 1585160007723
nodejs_heap_space_size_available_bytes{space="code"} 3104 1585160007723
nodejs_heap_space_size_available_bytes{space="map"} 217168 1585160007723
nodejs_heap_space_size_available_bytes{space="large_object"} 0 1585160007723
nodejs_heap_space_size_available_bytes{space="code_large_object"} 0 1585160007723
nodejs_heap_space_size_available_bytes{space="new_large_object"} 1047488 1585160007723

# HELP nodejs_version_info Node.js version info.
# TYPE nodejs_version_info gauge
nodejs_version_info{version="v12.13.1",major="12",minor="13",patch="1"} 1

There are Node.js packages that can help you export the recommended set of metrics and the ones you need that are specific to your application. prom-client is my current favorite for integrating Prometheus metrics into a Node.js application.
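
A brief sketch of wiring prom-client into an Express app (the port and the /metrics route are common conventions, not requirements):

const express = require('express');
const client = require('prom-client');

const app = express();

// Registers the default process and Node.js metrics shown above
// (CPU, memory, event loop lag, heap spaces, and so on).
client.collectDefaultMetrics();

// Expose the metrics for Prometheus to scrape.
app.get('/metrics', async (req, res) => {
  res.set('Content-Type', client.register.contentType);
  res.send(await client.register.metrics());
});

app.listen(3000);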

Kubernetes also generates Prometheus metrics, and Kubernetes distributions or cloud platforms often make it easy to scrape your requested endpoints, aggregate them, and then graph the resulting data. Even if you install Kubernetes locally, there are Helm charts that make it easy to do this. The end result is that if you export Prometheus metrics from your application it will generally be quite easy for you or your operations teams to aggregate, chart, and alert on this data.

Maintaining and upgrading your applications

As with all application development, you have to think about maintaining and upgrading your application. Kubernetes helps you update your application deployment by rolling out updated containers, but it does not help you update the contents of the containers you built.

You have to have a plan for how to maintain and upgrade each component you use to build your container or that you add into your container. These range from Docker images and Dockerfiles to liveness and readiness endpoints, logging, and metrics.

If you have X projects with Y containers, you can end up with a large number of container images that you must maintain. When you decide to update a dependency (for example, the logger or Prometheus package), you have to figure out which images need to be updated, what versions they are already using, and the like.

What happens if you move to a new team? You’ll need to figure out what their Dockerfiles look like, how the deployment artifacts are organized, what endpoint names are used for liveness and readiness, which logger they are using, and more, just to get started.

I don’t know about you, but at this point I start to think “enough is enough” and I just want to focus on the application code. Do I really need to think about Docker, Dockerfiles, YAML, and all this stuff that’s outside of my application itself?

Separating concerns

What I’d really like is some separation of concerns and tooling to help me build/test efficiently using best practices agreed on by my organization. Something along these lines:

The two main components are:

  • a common stack agreed on by developers, architects, and operations which is re-used across projects within my organization
  • the application and additional packages that I need for the applications that I’m working on

For me, having consistency in the areas that don’t affect the “secret sauce” of the application I’m working on brings great benefits in ongoing maintenance and upgrades, and even in moving across teams. I look at this as consistency where it makes sense, as opposed to uniformity.

Tools to help with consistency

There are a few tools that can help isolate the tasks that you don’t need to worry about so that you can just focus on your code. I’ll introduce three new ones that I think can help you succeed.

Appsody

The Appsody open source project attempts to separate out the operations aspects of application development, allowing developers to focus on their application code while building on a common stack (or stacks) that can be shared across an organization.

Appsody helps in the following ways:

  • Builds a base stack that you can layer applications on top of. This includes most of the base components I’ve mentioned above that are needed for cloud-native deployment.
  • Accelerates the iterative local development and test cycle.
  • Simplifies creating Docker images using best practices and deploying them to Kubernetes.

There are Node.js-specific stacks in Appsody, including:

  • nodejs – base stack for any Node.js application
  • nodejs-express – base stack with express built in
  • nodejs-loopback – base stack with loopback built in
  • nodejs-functions – base stack where you can write functions in the Function-as-a-Service (FaaS) style

I’m excited that Appsody offers reusable stacks that can be shared across teams and that allow architects and operators to update the stack separately from the application itself. This is a newish approach (hence the “incubator” in the path for the stacks), so I won’t be surprised if there are tweaks to the approach or tools as people experiment and get more familiar with it.

It’s great to see an open source code base starting to form, allowing organizations to start proving out the approach. The existing stacks integrate liveness and readiness endpoints and export a Prometheus metrics endpoint. At the same time, the base Appsody framework provides commands for building and deploying containers to Kubernetes or testing locally in Docker. The result is that much of what you need to know or understand as a Node.js developer for a cloud-native deployment is supported by the tooling.

While I still think it is good for you to have a basic understanding of Kubernetes and cloud-native development, these new tools mean you won’t need to be an expert on the ins and outs of deployment YAML files, multi-stage Dockerfiles, and the like.

Codewind

If you like what you see in Appsody but prefer a graphical user interface, you might want to check out the open source Codewind project. It adds a nice GUI, has Appsody integration, and allows you to bring your own visual editor (for example, VS Code). It also adds performance monitoring and Tekton pipelines. I’m more of a command-line vi person, but if you’re interested, you can check it out here.

IBM Cloud Pak for Applications

Finally, if you are sold on this approach and want to build on a supported stack from IBM, check out IBM Cloud Pak for Applications, which includes the stack-based approach as well as support for Node.js.

Final thoughts

I hope you’ve gained a more concrete understanding of what cloud-native development and Kubernetes mean for you as a Node.js developer, and I hope I’ve piqued your interest in stack-based development and Appsody. I talked about this subject at the most recent Node+JS Interactive conference; if you’d like to learn more, check out the slides and watch the recording.

Controlling Appium via raw HTTP requests with curl

By Appium, Blog, tutorial

This post originally appeared in Appium::Pro, Appium’s newsletter. Appium, a hosted Project at the OpenJS Foundation, is an open-source platform that enables automated testing of mobile and desktop apps on iOS, Android, Windows, Mac, and more.

Did you know that Appium is just a web server? That’s right! I’m going to write a full edition at some point on the WebDriver Protocol, which is the API that Appium implements (and which, as you can tell by the name, was invented for the purpose of browser automation). But for now, let’s revel in the knowledge that Appium is just like your backend web or API server at your company. In fact, you could host Appium on a server connected to the public internet, and give anybody the chance to run sessions on your devices! (Don’t do this, of course, unless you’re a cloud Appium vendor).

You may be used to writing Appium code in Java or some other language that looks like this:

driver = new IOSDriver<WebElement>(new URL("http://localhost:4723/wd/hub"), capabilities);
WebElement el = driver.findElement(locator);
System.out.println(el.getText());
driver.quit();

This looks like Java code, but what’s happening under the hood is that each of these commands is actually triggering an HTTP request to the Appium server (that’s why we need to specify the location of the Appium server on the network, in the IOSDriver constructor). We could have written all the same code, for example, in Python:

driver = webdriver.Remote("http://localhost:4723/wd/hub", capabilities)
el = driver.find_element(locator)
print(el.text)
driver.quit()

In both of these cases, while the surface code looks different, the underlying HTTP requests sent to the Appium server (and the responses coming back) are the same! This is what allows Appium (and Selenium, where we stole this architecture from) to work in any programming language. All someone needs to do is code up a nice little client library for that language that converts that language’s constructs to HTTP requests.
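
To make that concrete, here is a toy JavaScript sketch of what a “find element” call boils down to under the hood. This is not any real client library’s implementation; the helper name and parameters are made up for illustration, and it assumes a runtime with a global fetch (Node 18+ or a polyfill):

// Hypothetical helper: finding an element is just a POST to the
// /element route of an existing session, with a JSON body.
async function findElement(serverUrl, sessionId, using, value) {
  const res = await fetch(`${serverUrl}/session/${sessionId}/element`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ using, value }),
  });
  const body = await res.json();
  return body.value; // contains the ID of the element that was found
}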

What all this means is that we technically don’t need a client library at all. It’s convenient to use one, absolutely. But sometimes, we want to just run an ad-hoc command against the Appium server, and creating a whole new code file and trying to remember all the appropriate syntax might be too much work. In this case, we can just use curl, which is a command line tool used for constructing HTTP requests and showing HTTP responses. Curl works on any platform, so you can download it for your environment if it’s not there already (it comes by default on Macs, for example). There are lots of options for using curl, and to use it successfully on your own, you should understand all the components of HTTP requests. But for now, let’s take a look at how we might encode the previous four commands, without any Appium client at all, just by using curl!

# 0. Check the Appium server is online
> curl http://localhost:4723/wd/hub/status

# response:
{"value":{"build":{"version":"1.17.0"}},"sessionId":null,"status":0}
# 1. Create a new session
> curl -H 'Content-type: application/json' \
-X POST \
http://localhost:4723/wd/hub/session \
-d '{"capabilities": {"alwaysMatch": {"platformName": "iOS", "platformVersion": "13.3", "browserName": "Safari", "deviceName": "iPhone 11"}}}'

# response:
{"value":{"capabilities":{"webStorageEnabled":false,"locationContextEnabled":false,"browserName":"Safari","platform":"MAC","javascriptEnabled":true,"databaseEnabled":false,"takesScreenshot":true,"networkConnectionEnabled":false,"platformName":"iOS","platformVersion":"13.3","deviceName":"iPhone 11","udid":"140472E9-8733-44FD-B8A1-CDCFF51BD071"},"sessionId":"ac3dbaf9-3b4e-43a2-9416-1a207cdf52da"}}

# save session id
> export sid="ac3dbaf9-3b4e-43a2-9416-1a207cdf52da"

Let’s break this one down line by line:

  1. Here we invoke the curl command, passing the -H flag in order to set an HTTP request header. The header we set is the Content-type header, with value application/json. This is so the Appium server knows we are sending a JSON string as the body of the request. Why do we need to send a body? Because we have to tell Appium what we want to automate (our “capabilities”)!
  2. -X POST tells curl we want to make a POST request. We’re making a POST request because the WebDriver spec defines the new session creation command in a way which expects a POST request.
  3. We need to include our URL, which in this case is the base URL of the Appium server, plus /session because that is the route defined for creating a new session.
  4. Finally, we need to include our capabilities. This is achieved by specifying a POST body with the -d flag. We wrap our capabilities as a JSON object nested inside an alwaysMatch key, which itself sits inside a capabilities key.

Running this command, I see my simulator pop up and a session launch with Safari. (Did the session go away before you had time to do anything else? Then make sure you set the newCommandTimeout capability to 0.) We also get a bunch of output like in the block above. This is the result of the new session command. The thing I care most about here is the sessionId value of ac3dbaf9-3b4e-43a2-9416-1a207cdf52da, because I will need this to make future requests! Remember that HTTP requests are stateless, so for us to keep sending automation commands to the correct device, we need to include the session ID with subsequent commands so that Appium knows where to direct each one. To save it, I can just export it as the $sid shell variable.

Now, let’s find an element! There’s just one element in Appium’s little Safari welcome page, so we can find it by its tag name:

# 2. Find an element
> curl -H 'Content-type: application/json' \
-X POST http://localhost:4723/wd/hub/session/$sid/element \
-d '{"using": "tag name", "value": "h1"}'

# response:
{"value":{"element-6066-11e4-a52e-4f735466cecf":"5000","ELEMENT":"5000"}}

# save element id:
> export eid="5000"

In the curl command above, we’re making another POST request, but this time to /wd/hub/session/$sid/element. Note the use of the $sid variable here, so that we can target the running session. This route is the one we need to hit in order to find an element. When finding an element with Appium, two parameters are required: a locator strategy (in our case, “tag name”) and a selector (in our case, “h1”). The API is designed such that the locator strategy parameter is called using and the selector parameter is called value, so that is what we have to include in the JSON body.

The response we get back is itself a JSON object, whose value consists of two keys. The reason there are two keys here is a bit complicated, but what matters is that they each convey the same information, namely the ID of the element which was just found by our search (5000). Just like we did with the session ID, we can store the element ID for use in future commands. Speaking of future commands, let’s get the text of this element!

# 3. Get text of an element
> curl http://localhost:4723/wd/hub/session/$sid/element/$eid/text

# response:
{"value":"Let's browse!"}

This curl command is quite a bit simpler, because retrieving the text of an element is a GET command to the endpoint /session/$sid/element/$eid/text, and we don’t need any additional parameters. Notice how here we are using both the session ID and the element ID, so that Appium knows which session and which element we’re referring to (because, again, we might have multiple sessions running, or multiple elements that we’ve found in a particular session). The response value is the text of the element, which is exactly what we were hoping to find! Now all that’s left is to clean up our session:

# 4. Quit the session
> curl -X DELETE http://localhost:4723/wd/hub/session/$sid

# response:
{"value":null}

This last command can use all the default curl arguments, except that we need to specify a DELETE request, since that is what the WebDriver protocol requires for ending a session. We make this request to the endpoint /session/$sid, which includes our session ID so Appium knows which session to shut down.

That’s it! I hope you’ve enjoyed learning how to achieve some “low level” HTTP-based control over your Appium (and Selenium) servers!

Project News: WebdriverIO ships v6

By Announcement, Blog, Project Updates, WebdriverIO

Kudos to the WebdriverIO team for their recent v6 release. WebdriverIO, a hosted project at the OpenJS Foundation, is a next-gen browser automation test framework for Node.js.

Big updates include:

Drop Node v8 Support
WebdriverIO has dropped support for Node v8, which was deprecated by the Node.js team at the start of 2020. It is not recommended to run any systems using that version anymore, and it is strongly advised to switch to Node v12, which will be supported until April 2022.

Automation Protocol is now Default
Because of the great success of automation tools like Puppeteer and Cypress.io, it became obvious that the WebDriver protocol in its current shape and form doesn’t meet the requirements of today’s developers and automation engineers. Members of the WebdriverIO project are part of the W3C working group that defines the WebDriver specification, and they work together with browser vendors on solutions to improve the current state of the art. Thanks to folks from Microsoft, there are already proposals for a new bidirectional connection similar to other automation protocols like Chrome DevTools.

Performance Improvements
A big goal of the new release was to make WebdriverIO faster and more performant. Running tests on Puppeteer can already speed up local execution. Additionally, v6 replaces the heavy dependency on request, which was fully deprecated as of February 11th, 2020. With that, the bundle size of the webdriver and webdriverio packages has decreased by 4x.
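
As a rough sketch of what this looks like in practice, here is a minimal standalone-mode script; the automationProtocol option set to 'devtools' is the piece that opts into running on Puppeteer rather than a WebDriver endpoint (option names are per the v6 documentation, so verify them against your installed version):

const { remote } = require('webdriverio');

(async () => {
  const browser = await remote({
    // Assumption: 'devtools' runs the session on Puppeteer instead of WebDriver
    automationProtocol: 'devtools',
    capabilities: { browserName: 'chrome' },
  });

  await browser.url('https://webdriver.io');
  console.log(await browser.getTitle());

  await browser.deleteSession();
})();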

These are only a few of the things the v6 release brings. Read the full blog post on the WebdriverIO site.