For those wanting to start learning Node.js, the path has not always been clear. While many free resources and forums are available to help, they require individual planning, research, and organization, which can make it difficult for some to learn these skills. That’s why The Linux Foundation and OpenJS Foundation have released a new, free, online training course, Introduction to Node.js. This course is designed for frontend or backend developers who would like to become more familiar with the fundamentals of Node.js and its most common use cases. Topics covered include how to rapidly build command line tools, mock RESTful JSON APIs, and prototype real-time services. You will also discover and use various ecosystem and Node core libraries, and come away understanding common use cases for Node.js.
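As a small taste of the first of those use cases, here is a minimal sketch of a Node.js command line tool (illustrative only, not course material): it echoes its arguments back in upper case.

```javascript
// upper.js — a tiny command line tool sketch (illustrative, not course material).
// Usage: node upper.js hello world  →  HELLO WORLD

function upper(args) {
  return args.map((a) => a.toUpperCase()).join(' ');
}

// process.argv[0] is the node binary and process.argv[1] is the script
// path, so the user's arguments start at index 2.
console.log(upper(process.argv.slice(2)));
```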
This blog was written by A.J. Roberts and the Node.js Mentorship Initiative team. This post was first published on the Node.js blog. Node.js is an impact project at the OpenJS Foundation.
The Node.js Mentorship Initiative is happy to announce our next opportunity. This one is open to developers with experience in C++. You will work hand-in-hand with the N-API working group members with the eventual goal of becoming a full-fledged member of the working group.
If you’re not familiar with the working group, we recommend checking out their recent blog post.
The N-API working group has the goal of making it easier to develop native addons for Node.js and other runtimes like Electron. They have already accomplished a lot of crucially important work for the Native Addons ecosystem. You can help them accomplish even more by improving test coverage, adding features to N-API, and creating examples for native plugin authors to follow.
The N-API working group will set aside time specifically for helping and guiding you, so it’s definitely worth applying through the Mentorship Initiative if you feel like this would benefit you. In order to do that, you should complete the application and its included challenge. The challenge is expected to take 2–4 hours to complete.
Please apply here by January 29th, 2021. We look forward to seeing your submissions.
Ryder’s low-code, screen-scraping solution was effective for a long time, yet as their customers’ expectations evolved, they had an opportunity to upgrade.
To keep up with consumer demand, they implemented Profound Logic’s Node.js development products to create RyderView. Their new web-based solution helped transform usability for their customers and optimize internal business processes for an overall better experience.
Third-party freight carriers across North America rely on Ryder’s Last Mile legacy systems to successfully deliver packages. Constantly adding features to the legacy system made for a monolithic application that was neither intuitive nor scalable.
The Ryder team, led by Barnabus Muthu, IT & Application Architect, wanted to develop an intuitive web application that provided real-time access to critical information. Muthu wanted to balance the need for new development with his legacy programs’ extensive business logic.
Profound Logic’s Node.js development solutions were a great fit, allowing Muthu to expose his IBM i databases via APIs that push and pull data from external cloud systems in real time. He was also able to cut development time by using npm packages. Using Node.js, Ryder was able to build a modern, web-based application that no longer relied on green screens while still leveraging its existing RPG business logic.
This new solution was named RyderView and it transformed usability for its customers, translating to faster onboarding and reduced training costs for Ryder.
For third-party users, it led to improved productivity as entire time-consuming processes were made obsolete. Previously, Ryder’s third-party agents used paper-based templates to capture information while in the field. Now that Ryder’s new application uses microservices to push and pull data from iDB2, end users have been upgraded to a mobile application. These advancements benefited Ryder as well, allowing them to eliminate paperwork, printing costs, and the licensing of document processing software.
Now is a great time to invest in yourself, or in your engineering team. From November 30 through December 8, the OpenJS Foundation, in partnership with the Linux Foundation, will be discounting all Node.js Certifications and Trainings. The OpenJS Certification and Training program serves to help developers with their professional development goals.
Certifications are excellent ways to validate your own development skills to yourself, employers, and the world.
OpenJS Node.js Application Developer (JSNAD) The OpenJS Node.js Application Developer certification is ideal for the Node.js developer with at least two years of experience working with Node.js. For more information and how to enroll: https://training.linuxfoundation.org/certification/jsnad/
OpenJS Node.js Services Developer (JSNSD) The OpenJS Node.js Services Developer certification is for the Node.js developer with at least two years of experience creating RESTful servers and services with Node.js. For more information and how to enroll: https://training.linuxfoundation.org/certification/jsnsd/
Feel confident in taking your exams with the Node.js Training courses. These courses help prepare developers for the Node.js certification exams.
Node.js Application Development (LFW211) This course provides core skills for effectively harnessing a broad range of Node.js capabilities in depth, equipping you with rigorous skills and knowledge to build any kind of Node.js application or library. While by design the training content covers everything but HTTP and web frameworks, the crucial fundamentals presented prepare the student to work with web applications along with all other types of Node.js applications.
Node.js Services Development (LFW212) This course provides a deep dive into Node core HTTP clients and servers, web servers, RESTful services and web security essentials. With a major focus on Node.js services and security, this content is an essential counterpart to the Node.js Application Development (LFW211) course, and will prepare you for the OpenJS Node.js Services Developer (JSNSD) exam.
If pursuing Node.js Certifications and Trainings sounds like something you’d like to know more about, check out more information at this link.
N-API provides an ABI-stable API that can be used to develop native add-ons for Node.js, simplifying the task of building and supporting such add-ons across Node.js versions.
With downloads of node-addon-api surpassing 2.5 million per week, all LTS versions of Node.js supporting N-API version 3 or higher, and Node.js 15.x released with support for N-API 7, it is a good time to take a look at the progress on simplifying native add-on development for Node.js.
When we started working on N-API back in 2016 (the original proposal dates to 12 Dec 2016), we knew it was going to be a long journey. There are many native packages in the ecosystem, and we understood the transition would take quite some time.
The good news is that we have come a long way since the initial proposal. There has been a lot of work by the Node.js collaborators and the team focussed on N-API as well as package authors who have moved over. In that time, N-API has become the default recommendation for how to build native add-ons.
While the basic design has remained consistent (as planned), we’ve added incremental features in each new N-API version in order to address feedback from package authors as they adopted N-API and node-addon-api.
Having said that, let’s dive into some of the new features/functions that have been added over the last few years.
As people have been using N-API and node-addon-api, we’ve been adding the key features that were needed, while also generally improving the add-on experience.
The changes fall into three main categories, which are covered in the sections that follow.
Multi-Threaded and Asynchronous Programming
Performing computationally-intensive tasks on the main thread blocks program execution, queuing events and callbacks in the event loop. As we gained experience with real-world use, and in order to facilitate program integrity across multiple threads, both N-API and its wrapper node-addon-api were updated to provide several mechanisms for calling into the Node.js thread from outside the main event loop, with the choice depending on the use case:
AsyncWorker: provides a mechanism to perform a one-shot action, and notify Node.js of its eventual completion or failure.
AsyncProgressWorker: similar to the above, adding the ability to provide progress updates for the asynchronous action.
Thread-safe functions: provides a mechanism to call into Node.js at any time from any number of threads.
Another recent Node.js development is the arrival of workers. These are full-fledged Node.js environments running in threads parallel to the Node.js main thread. This means that native add-ons can now be loaded and unloaded multiple times as the main process creates and destroys worker threads.
Since worker threads share the same memory space as the main process, multiple copies of a native add-on must now be able to co-exist in a single process. On the other hand, the library containing a native add-on is only loaded once per process. Thus, global data stored by a native add-on, which until now could live in global variables, must no longer be stored that way, because global storage is not thread-safe.
Static data members of C++ classes are likewise stored in a thread-unsafe manner, so those must also be avoided. It’s also important to remember that a thread is not necessarily what makes an add-on instance unique, so thread-local storage of such data should also be avoided.
In N-API version 6 we started providing a space for storing per-instance global data by introducing the concept of add-on instances, multiple of which can co-exist in a process, and by providing some tools for creating self-contained add-ons, such as:
the NAPI_MODULE_INIT() macro, which initializes an add-on in such a way that it can be loaded multiple times during the life cycle of the Node.js process.
The node-addon-api Addon<T> class, which neatly combines the above tools to create a class whose instances represent the instances of an add-on present in the various worker threads created by Node.js. Add-on maintainers can thus store per-add-on-instance data as member variables of an Addon<T> instance, and Node.js will create an instance of the Addon<T> class whenever one is needed on a new thread.
Additional helper methods
As package maintainers used N-API we discovered a few additional APIs that were commonly needed. These included:
Retrieving property names from objects
One of the other main areas where the N-API team worked to fill in gaps and make it easier for maintainers to consume N-API was the build workflow, including additions to CMake.js, node-pre-gyp and prebuild.
Historically, Node.js native add-ons have been built using node-gyp. For source code libraries that are already built using CMake, the CMake.js build tool is an attractive alternative for building Node.js native add-ons. We have recently added an example of an add-on built using CMake.
Detailed information about using CMake.js with N-API add-ons can be found on the N-API Resource.
One of the realities of developing Node.js native add-ons is the fact that as part of installing the package using npm install the C or C++ code must be compiled and linked. This compilation step requires that a viable C/C++ toolchain be installed on the system doing the compilation. This can present a barrier to the adoption of native add-ons as the user of the add-on may not have the necessary tools installed. This can be addressed by creating prebuilt binaries that can be downloaded by the user of the native add-on.
A number of build tools can be used to create prebuilt binaries. node-pre-gyp builds binaries that are typically uploaded to AWS S3. prebuild is similar to node-pre-gyp but uploads the binaries to a GitHub release.
prebuildify is another option similar to the above that enables the native add-on developer to bundle the prebuilt binaries into the module uploaded to npm. The advantage of this approach is that the binaries are immediately available to the user when the package is downloaded. Although the downloaded npm package is larger in size, in practice the entire download process is faster for the user because secondary download requests to AWS S3 or a GitHub release are unnecessary.
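As a sketch of how the prebuildify approach fits together (this follows the pattern in the prebuildify and node-gyp-build documentation; the package name and version ranges here are illustrative), a package.json for such an add-on might look like:

```json
{
  "name": "my-native-addon",
  "version": "1.0.0",
  "scripts": {
    "prebuild": "prebuildify --napi --strip",
    "install": "node-gyp-build"
  },
  "dependencies": {
    "node-gyp-build": "^4.0.0"
  },
  "devDependencies": {
    "prebuildify": "^4.0.0"
  }
}
```

Running the prebuild script bundles the compiled binaries into a prebuilds/ folder that ships inside the npm package, and node-gyp-build falls back to compiling from source only when no matching prebuilt binary exists on the user’s platform.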
Resources for getting started
One resource available to help get started is the node-addon-examples GitHub repository, containing samples of various Node.js native add-ons. The root of the repository contains folders for different functional aspects, from a simple Hello World add-on to a more complex multi-threaded add-on. Each example folder contains up to three subfolders: one for each Node.js add-on implementation (legacy NAN, N-API, and node-addon-api). To get started with the Hello World example using the node-addon-api implementation, simply run:
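The commands themselves are not shown above; assuming the repository’s current layout (folder names may differ between revisions), the steps look roughly like:

```shell
# Clone the examples repository and build the Hello World add-on using
# the node-addon-api implementation. The folder names below are
# assumptions based on the repository layout and may change over time.
git clone https://github.com/nodejs/node-addon-examples.git
cd node-addon-examples/1_hello_world/node-addon-api
npm install       # compiles the native add-on
node hello.js     # loads the add-on and prints its greeting
```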
Another resource available is the N-API Resource. This website contains information and additional in-depth walkthroughs on building Node.js add-ons and other advanced topics, such as:
tools needed to get started
migration guide from NAN
differences between build systems (node-gyp, cmake, …)
context-sensitivity and thread-safety
Closing out and call to action
The resulting C API is now a part of every Node.js distribution, and a C++ convenience wrapper called node-addon-api is distributed as an external package through npm. N-API was launched with a promise to guarantee API and ABI compatibility across different major versions of Node.js, and this has introduced a series of benefits:
It has removed the need to recompile modules when migrating to newer major versions of Node.js
Since N-API is a C API it is possible to implement native add-ons using languages other than C / C++ (such as Go or Rust).
When N-API was released as an experimental API in Node.js v8.0.0, adoption grew slowly, but many developers sent feedback and contributions, which led us to add new features and create new tools to better support the entire native add-on ecosystem.
Today N-API is widely used for the development of native add-ons, and some of the most heavily used native add-ons have already been ported to N-API.
We are constantly making progress on N-API and on the native add-ons ecosystem in general, but we always need more help. You can help us and the whole community continue improving N-API in many ways:
Porting your own native module to use N-API
Porting a native module that your app depends on to N-API
The OpenJS Node.js Application Developer (JSNAD) and the OpenJS Node.js Services Developer (JSNSD) Exams (Node.js Certifications) will be updated from Node.js version 10, which is now in maintenance, to Node.js version 14, the most current LTS (Long Term Support) line. Changes will come into effect November 3, 2020; all tests taking place after 8:00 pm PT on that date will be based on Node.js version 14.
The updated exam will include the ability to use either native ECMAScript modules or CommonJS modules to answer questions, with CommonJS remaining the default and ECMAScript modules as an opt-in.
For example, a given task on the examination may provide a folder containing an answer.js file and a package.json file. The package.json file does not contain a type field, as is the case when generating a package.json file with npm init. By default, the answer.js file is therefore considered a CommonJS module. So loading a module would be achieved like so:
const fs = require('fs')
To opt in to native ECMAScript modules, candidates may either set the type field of the package.json file to module or rename the answer.js file to answer.mjs. In either of those cases a module would be loaded like so:
import fs from 'fs'
Candidates may also explicitly opt in to CommonJS by setting the type field to commonjs or by renaming answer.js to answer.cjs, but this is unnecessary, as the absence of a type field means the answer.js file is interpreted as CommonJS anyway.
While there are no changes to the current set of Domains and Competencies for the JSNAD and JSNSD Exams, candidates are advised to review the functionality of libraries and frameworks on Node.js version 14. For a full list of differences between Node.js version 10 and Node.js version 14, see https://nodejs.medium.com/node-js-version-14-available-now-8170d384567e.
These exams are evergreen: soon after a Node.js version becomes the latest LTS line, the certifications are updated to stay in lockstep with that LTS version. Now that Node.js version 10 has moved into maintenance, certifications will be based on Node.js version 14.
The OpenJS Node.js Certification program was developed in partnership with NearForm and NodeSource. The certifications are a good way to showcase your abilities in the job market and allow companies to find top talent.
The new release includes:
N-API Version 7
Throw on unhandled rejections
Additional project news includes:
Completion of the Node.js Contributors Survey to gather feedback on the contribution process to determine target areas for improvement.
Big improvements to Node.js automation and tooling, including the ability to kick off CI runs and land commits just by adding a GitHub label, making it easier for collaborators to manage the constant flow of Pull Requests.
The beginning of the Next 10 Years of Node.js effort. The goal of this effort is to reflect on what led to success in the first 10 years of Node.js and set the direction for success in the next 10. One of the outcomes so far is that we’ve created a Technical Values document to guide our efforts.
To learn more about Node.js v15, please see the blog post written by Bethany Griggs and the Node.js TSC.
Recently, Robin Ginn, OpenJS Foundation Executive Director and Joe Sepi, OpenJS CPC Chair, sat down with Amanda Blackburn of OpenJS Member Company, Profound Logic, to discuss the Foundation and Node.js. The following was posted originally on the Profound Logic blog.
Joe has been active in the Node.js project community, and the foundation, for a number of years and was part of the small group that merged the JS Foundation and the Node.js Foundation. He is now the chairperson of the Cross Project Council, the OpenJS Foundation’s top technical advisory committee.
Welcome to you both and thanks for joining us.
Thanks for having us.
Yeah thank you Amanda.
Just to jump right in, Robin, can you tell us about the history of the OpenJS Foundation?
I have been a part of the Node.js community since it started almost 11 years ago. As you mentioned in your intro, the Node.js Foundation merged with the JS Foundation to create a new home, the OpenJS Foundation. We are new, but a lot of us have been at it for a very long time, and we have lots of new friends joining. What we do is offer a neutral place for Open Source technology development and collaboration to happen. Having that neutral place is really important. If you are a company taking a big bet on a piece of Open Source software, you want to know that it is being developed in a fair, open, clear, and transparent way.
We are super excited that Profound Logic is one of the members of the OpenJS Foundation. This membership helps us develop programs to support training and certification services, provide some IP support, and give people a place to develop. We just hope that you all benefit from having greater connections to the community and take advantage of some marketing and thought leadership opportunities, as you are important leaders in the Open Source community.
Definitely. We have enjoyed being a part of that because it is such a vibrant community, especially on the Node side from our experience. It is so nice to be able to go out and see people so passionate about a language and technology, and to see what they are able to do with it.
Joe, you have been involved with the community side of this, can you tell us a little more information on what that has been like?
Sure! As you mentioned, the Node community is really passionate. Not just for the platform and the technology, but also for the community and the governance of the project. I have been a part of the Node.js community committee for a number of years, which focuses on the aspects outside of the core technical platform development, like the community part. We have taken a lot of that to the OpenJS Foundation and are working hard on building out a great community. We have been making a lot of progress on the individual supporter program and generally just trying to engage with the community more.
Yeah, it has been very impressive to see the level of commitment and involvement in the community. This is something that we enjoy sharing with businesses that we talk to. There are always new ideas and new ways to support businesses on these languages. Today’s businesses have more tech options than ever before, but also just as many tech challenges.
A question I would ask you both is: why is Open Source important to today’s businesses?
Let’s just look at Node.js and why it is important. I think we were having Hackathons probably 12-13 years ago. I love to credit the foundation model for keeping Node.js modern and trustworthy for businesses today. You mentioned Netflix; NASA is using Node.js in space suit solutions as their astronauts spend time on the space station. I think most companies are using Open Source, and the Linux Foundation just released a white paper on the importance of Open Source for business, particularly vertical businesses. They found that when businesses contribute to Open Source as they move toward digital transformation and modernization, it helps them innovate much more quickly: three times faster.
I know for our customers, since we are in the legacy modernization space, we definitely try to get that message across. Open Source, including Node.js, can be a really great way to help them address those challenges.
Yeah! I just pulled up the GitHub repository for Node, and there are over 2,800 contributors. I don’t want to say that is free work, because you should always give back and support Open Source, but you are getting all of these people focusing on making the platform stable, secure and modern. It is like having a whole other team supporting the work and products that you are using.
That is what I love about GitHub. The support and feedback are instant and open for all. As you are building your own software solutions you have access to that developer feedback and documentation. 2800 developers are all working on it in real time.
That is something that we have even taken advantage of for our own products and services. For those that don’t know, npm [now a part of GitHub] is a great way to discover applications and code to repurpose in any number of ways, even for business technologies.
Something that we have noticed in the AS/400 market is that a lot of our developers are getting older, and soon they will be retiring. A lot of the businesses we work with are still running applications on RPGLE or older application languages and will lose that mindshare when their developers leave. How can Node or other Open Source languages help bridge that gap, especially when looking for new developers?
I don’t think there has been a better time to utilize open source technologies to modernize these legacy applications. In my experience, when moving from legacy applications to more modern approaches (like microservices, for example) you can do it with a piecemeal-type approach. Take certain applications and start to think about things in isolation so you can maintain and update them without affecting the larger application. The more you can separate those types of things, the better.
Definitely. Robin, you might see this as well. One of the great challenges for businesses, not just legacy businesses, is the accumulation of technical debt and how to address that. I would imagine most businesses struggle with this. Could you speak a little about how Open Source languages might be able to help with that?
If you look at a combination of open source and open standards, what you are really doing is driving interoperability. You can ease your transition to the cloud without having to rip and replace absolutely everything. You know that your systems will work better together. Node.js and other open source technologies give you that flexibility to build modern apps and new solutions. You also mentioned the ability to attract new talent and developers. Before lockdown I was at a developer conference and I talked to some recruiters. They said one of the top categories they were hiring for was Node.js developers. It is definitely at the top of the developer talent pool.
That is great to hear, and we are seeing the same.
Speaking of success, which you mentioned before Robin with NASA trusting Node to keep astronauts safe in space, are there any other success stories of businesses using Node?
Oh gosh. I think we like to say Node is everywhere, and it truly is once you start to talk to companies. We have actually been running some case studies on our blog if anyone is interested in taking a look. Companies like Netflix are using Node.js, Esri is doing some COVID tracking, as well as NearForm. There are a lot of really fascinating use cases.
But again, when you’re talking mission-critical, like making sure your space suit doesn’t leak, it matters. We had written up this really cool case study, so we invited the NASA astronaut who benefitted from our technology, Christina H. Koch. She spoke at OpenJS World, so you might want to check that out. She has a really fascinating story on how NASA is using these technologies.
Yeah, that was a really great talk and I really enjoyed it. There is also a case study on the Weather Company. They have billions of locations, 60 languages, and 230 countries.
Yeah that is pretty amazing, and I would say that shows Node’s scalability.
Yeah, that is something that we see as well. Robin, you had mentioned rip and replace, and that is an option that we are always opposed to because we have enough experience to know it is not the fastest, easiest, or most thorough way. It is always a huge mess, and very expensive and risky. The other option we see is rewriting in something like Java or .NET, but those run into similar limitations.
Node offers so much more flexibility, portability and stability that businesses can take advantage of. They can utilize that technology to help them do things like connect to the cloud or use AI. With things like npm you can just plug those right into your application, which is pretty cool.
Yeah and I think Node.js works in all the clouds. If you have a multi or single cloud strategy, it is going to work for you.
It is also, if I am not mistaken, the most utilized platform in everyone’s cloud. The other thing that is great about Node: it is great to use at the core of your applications, but if you have something that is resource-intense, you can spin up a worker thread or send that work out to another service. Keeping Node at your core is a great option.
You have both touched on my next question, which is: what advice and best practices would you give businesses considering leveraging Node for their enterprise applications?
One thing to keep in mind at the onset is to be very cloud-native and cloud-ready focused with your Node.js development. Think about how your Node applications will integrate with Kubernetes, how you will surface your metrics, and things like that.
Yeah definitely. Most of our solutions here at Profound are based in Node. Doing things like offering options for systems integration, API, portability to the cloud, modernizing legacy code… Node is very flexible for all of these options.
Yeah, and getting away from these monolithic applications and moving to a more microservice-oriented architecture is a good way to look at things too. And of course, serverless is a great option if that is the right use case; some kind of event-driven architecture is very cost efficient and versatile.
That all sounds like really good advice. I know that businesses have a lot to think about when it comes to their technology and Node, and other Open Source languages, are mature, stable, secure and flexible enough to help businesses of all sizes and industries accomplish their goals.
I have one final question: How are you staying sane through quarantine? And have you developed any new hobbies?
I am staying sane by relying on old hobbies, I am a musician. I have been doing some socially distanced and responsible band practices and working on a new record. [Check out some of Joe’s music!]
That is really cool. My big thing has always been exercise. That has always been my number one thing. Old hobby: I actually just bought a guitar and my son is teaching me to play. I have not played since I was a kid.
Wow well maybe you and Joe can collaborate on some musical projects.
Amazing. I am into it.
Jamming on our weekly calls.
I feel like if there is a silver lining to the quarantine at all, it is that it is challenging the way we spend our time, and even the way we work. Technology plays a part in that as well. It is definitely an interesting time, but it is really cool that you both have that in common.
Yeah and how about you Amanda? What are you doing?
I have actually gotten into Youtubing and creating videos on different topics that are my interests like Sci-Fi and video game stuff. I like that you get to interact with people who are interested in the same topics.
Well thanks so much for taking the time and being here today. It was really great to talk to you both and learn more about the foundation and all it has to offer.
Thanks Amanda and thanks to the Profound Logic team for hosting us.
Yeah thank you so much. Great to be here.
I wanted to take the time to direct everyone to OpenJSF.org to take advantage of all the foundation has to offer. That includes Open Source training and certification, collaboration with the community, and learning more about the projects and how to be a part of that.
Thanks for taking the time to join us, and we will talk to you next time!
As platforms grow, so do their needs. However, the core infrastructure is often not designed to handle these new challenges as it was optimized for a relatively simple task. Netflix, a member of the OpenJS Foundation, had to overcome this challenge as it evolved from a massive web streaming service to a content production platform. Guilherme Hermeto, Senior Platform Engineer at Netflix, spearheaded efforts to restructure the Netflix Node.js infrastructure to handle new functions while preserving the stability of the application. In his talk below, he walks through his work and provides resources and tips for developers encountering similar problems.
Check out the full presentation
Netflix initially used Node.js to enable high volume web streaming to over 182 million subscribers. Their three goals with this early infrastructure were to provide observability (metrics), debuggability (diagnostic tools) and availability (service registration). The result was the NodeQuark infrastructure. An application gateway authenticates and routes requests to the NodeQuark service, which then communicates with APIs and formats responses that are sent back to the client. With NodeQuark, Netflix also created a managed experience: teams could create custom API experiences for specific devices. This allows the Netflix app to run seamlessly on different devices.
However, Netflix wanted to move beyond web streaming and into content production. This posed several challenges to the NodeQuark infrastructure and the development team. Web streaming requires relatively few applications, but serves a huge user base. On the other hand, a content production platform houses a large number of applications that serve a limited userbase. Furthermore, a content production app must have multiple levels of security for employees, partners and users. An additional issue is that development for content production is ideally fast paced while platform releases are slow, iterative processes intended to ensure application stability. Grouping these two processes together seems difficult, but the alternative is to spend unnecessary time and effort building a completely separate infrastructure.
Hermeto decided that in order to solve Netflix’s problems, he would need to use self-contained modules. In other words, plugins! By transitioning to plugins, the Netflix team was able to separate the infrastructure’s functions while still retaining the ability to reuse code shared between web streaming and content production. Hermeto then took plugin architecture to the next step by creating application profiles. An application profile is simply a list of plugins required by an application. The profile reads in those specific plugins and exports a loaded array, which reduces the risk that a plugin built for content production breaks the streaming application. Additionally, by sectioning code out into smaller pieces, the Netflix team was able to remove moving parts from the core system, improving stability.
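The profile idea described above can be sketched in a few lines of Node.js. This is a hypothetical illustration, not Netflix’s actual code; plugin names and the init-function shape are assumptions made for the example.

```javascript
// Hypothetical sketch of plugins as self-contained modules.
// Each plugin exposes a name and an init function that decorates the app.
const metrics = { name: 'metrics', init: (app) => ({ ...app, metrics: true }) };
const auth = { name: 'auth', init: (app) => ({ ...app, auth: true }) };
const workflow = { name: 'workflow', init: (app) => ({ ...app, workflow: true }) };

// An application profile is simply the list of plugins an application needs.
const streamingProfile = [metrics, auth];
const productionProfile = [metrics, auth, workflow];

// Loading a profile runs each plugin's init and exports the loaded array.
// A plugin only used by the production profile never touches streaming apps.
function loadProfile(profile) {
  return profile.map((plugin) => ({
    name: plugin.name,
    instance: plugin.init({}),
  }));
}

const loaded = loadProfile(streamingProfile);
console.log(loaded.map((p) => p.name)); // → [ 'metrics', 'auth' ]
```

Because the streaming profile never lists the hypothetical `workflow` plugin, a bug in that plugin cannot be loaded into the streaming application, which is the isolation property the talk describes.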
In the future, Hermeto wants to allow teams to create specific application profiles that they can give to customers. Additionally, Netflix may be able to switch from application versions to application profiles as the code breaks into smaller and smaller pieces.
To finish his talk, Hermeto gave his personal recommendations for open source projects that are useful for observability and debuggability. Essentially, a starting point for building out your own production-level application!
This blog is a call to action for package maintainers: to move one of our initiatives forward, we need your help.
It’s been almost two years, and we’ve been working on a number of initiatives, which you can learn more about through the issues in the package-maintenance repo. Things never move quite as fast as we’d like, but we are making progress in a number of different areas.
One area that we identified was:
Building and documenting guidance, tools and processes that businesses can use to identify packages on which they depend, and then to use this information to be able to build a business case that supports their organization and developers helping to maintain those packages.
We started by looking at how to close the gap between maintainers and consumers in terms of expectations. Mismatched expectations can often be a source of friction and by providing a way to communicate the level of support behind a package we believe we can:
- help maintainers communicate the level of support they can/want to provide, which versions of Node.js they plan to support going forward, and the current level of backing in place to help keep development of the package moving forward
- reduce potential friction due to mismatched expectations between module maintainers and consumers
- help consumers better understand the packages they depend on so that they can improve their planning and manage risk
In terms of managing risk we hope that by helping consumers better understand the key packages they depend on, it will encourage them to support these packages in one or more ways:
- encourage their employees to help with the ongoing maintenance
- provide funding to the existing maintainers
- support the Foundation the packages are part of (for example, the OpenJS Foundation)
After discussion at one of the Node.js Collaborator Summits, where there was good support for the concept, the team worked to define some additional metadata in the package.json in order to allow maintainers to communicate this information.
The TL;DR version is that it allows the maintainer to communicate:

- target: the platform versions that the package maintainer aims to support. This is different from the existing engines field in that it expresses a higher-level intent, such as “current LTS version”, for which the specific versions can change over time.
- response: how quickly the maintainer chooses to, or is able to, respond to issues and contacts for that level of support.
- backing: how the project is supported, and how consumers can help support the project.
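As a rough illustration of how those three fields fit together, the metadata might look something like the sketch below. The exact schema, field names and accepted values are defined by the support specification, so treat this as an assumption-laden example rather than the authoritative format.

```json
{
  "name": "example-package",
  "version": "1.0.0",
  "support": {
    "versions": [
      {
        "version": "*",
        "target": { "node": "lts" },
        "response": { "type": "best-effort" },
        "backing": { "hobby": "maintained in the author's spare time" }
      }
    ]
  }
}
```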
We completed the specification a while ago, but before asking maintainers to start adding the support information we wanted to provide some tooling to help validate that the information added was complete and valid. We’ve just finished the first version of that tool, which is called support.
The tool currently offers two commands:
The show command displays a simple tree of the packages for an application, along with the raw support information for those packages. More sophisticated commands to help consumers review and understand the support info will make sense later, but at this point it’s more important to start getting the information filled in, as that is needed before more sophisticated analysis is useful.
The validate command helps maintainers validate that they’ve added/defined the support information correctly. If there are errors or omissions, it will let the maintainer know, so that as support information is added it is high quality and complete.
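Putting the two commands together, a typical session might look something like the following. The invocation style is an assumption based on the command names above; check the tool’s own documentation for the exact flags and output.

```
# As a consumer: show the package tree with raw support information
npx support show

# As a maintainer: check that the support info in package.json is
# complete and valid before publishing
npx support validate
```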
Our call to action is for package maintainers to:
- Review the support specification and give us your feedback if you have suggestions or comments.
- Add support info to your package.
- Use the support tool to validate the support info, and give us feedback on the tool.
- Let us know when you’ve added support info, so that we can keep track of how well the ecosystem is supporting the initiative, as well as which real-world packages we can use/reference when building additional functionality into the support tool.
We hope to see the ecosystem start to provide this information and look forward to seeing what tooling people (including the package-maintenance working group and others) come up with to help achieve the goals outlined.