Electron Update: Community Discord Server and Hacktoberfest

By Blog, Electron, Project Update

This blog was originally posted on the Electron blog. Electron is an Impact Project at the OpenJS Foundation.

Community Discord Server and Hacktoberfest

Join us for community bonding and a month-long celebration of open-source.

Electron Community Discord Launch

Electron’s Outreach Working Group is excited to announce the launch of our official community Discord server!

Why a new Discord server?

In its early days as the backbone of the Atom text editor, community discussion on the Electron framework occurred in a single channel in Atom’s Slack workspace. As time passed and the two projects were increasingly decoupled, the relevance of the Atom workspace to the Electron project decreased, and maintainer participation in the Slack channel declined in the same manner.

Up until now, we had still been redirecting our broader community to the Atom Slack workspace, even though we’ve had many reports from folks who have had trouble receiving invitations, and few of our core maintainers were frequenting the channel.

We’re setting up this shiny new server to be a central discussion hub for the community where you can get the latest news on all things Electron.

Get in here!

So far, the server’s membership consists of a few maintainers who have been working together to set it up, but we’re so excited to chat with you all! Come ask for help, keep up to date with Electron releases, or just hang out with other developers. We’ve got a handy invite for you that’ll give you access to the server!

Hacktoberfest 2020

As a large and long-running open-source project, Electron wouldn’t have been nearly as successful without all the contributions from its community, from code submissions to bug reports to documentation changes, and much more. That’s why we believe in the importance of participating in Hacktoberfest to usher in a wider community of developers of all skill levels into the project.

Odds and ends

This year, we don’t have a wider project to give you all to work on, but we’d like to focus on opportunities to contribute across the Electron JavaScript ecosystem.

Look out for issues tagged hacktoberfest across our various repositories, including the main electron/electron repository, the electron/electronjs.org website, electron/fiddle, and electron-userland/electron-forge!

P.S. If you’re feeling particularly adventurous and want more of a challenge, we also have a backlog of issues marked with help wanted tags.

Stuck? Come chat with us!

It’s no coincidence that the grand opening of our Discord server coincides with the largest celebration of open-source software of the year. Check out the #hacktoberfest channel to ask for help on your Hacktoberfest PR. In case you missed it, here’s the invite link again!

Have feedback on this post? Let @electronjs know on Twitter.

Need help or found a bug? Contact us.

From streaming to studio: The evolution of Node.js at Netflix

By Blog, Case Study, Node.js, Project Update

As platforms grow, so do their needs. However, the core infrastructure is often not designed to handle these new challenges as it was optimized for a relatively simple task. Netflix, a member of the OpenJS Foundation, had to overcome this challenge as it evolved from a massive web streaming service to a content production platform. Guilherme Hermeto, Senior Platform Engineer at Netflix, spearheaded efforts to restructure the Netflix Node.js infrastructure to handle new functions while preserving the stability of the application. In his talk below, he walks through his work and provides resources and tips for developers encountering similar problems.

Check out the full presentation 

Netflix initially used Node.js to enable high volume web streaming to over 182 million subscribers. Their three goals with this early infrastructure were to provide observability (metrics), debuggability (diagnostic tools) and availability (service registration). The result was the NodeQuark infrastructure. An application gateway authenticates and routes requests to the NodeQuark service, which then communicates with APIs and formats responses that are sent back to the client. With NodeQuark, Netflix also created a managed experience — teams could create custom API experiences for specific devices. This allows the Netflix app to run seamlessly on different devices.

Beyond streaming

However, Netflix wanted to move beyond web streaming and into content production. This posed several challenges to the NodeQuark infrastructure and the development team. Web streaming requires relatively few applications, but serves a huge user base. On the other hand, a content production platform houses a large number of applications that serve a limited user base. Furthermore, a content production app must have multiple levels of security for employees, partners and users. An additional issue is that development for content production is ideally fast-paced, while platform releases are slow, iterative processes intended to ensure application stability. Grouping these two processes together seems difficult, but the alternative is to spend unnecessary time and effort building a completely separate infrastructure.

Hermeto decided that in order to solve Netflix’s problems, he would need to use self-contained modules. In other words, plugins! By transitioning to plugins, the Netflix team was able to separate the infrastructure’s functions while still retaining the ability to reuse code shared between web streaming and content production. Hermeto then took plugin architecture to the next step by creating application profiles. The application profile is simply a list of plugins required by an application. The profile reads in these specific plugins and then exports a loaded array. Therefore, the risk of a plugin built for content production breaking the streaming application was reduced. Additionally, by sectioning code out into smaller pieces, the Netflix team was able to remove moving parts from the core system, improving stability. 
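To make the application profile idea concrete, here is a hypothetical sketch; the module names and loading mechanics are illustrative assumptions, not Netflix’s actual NodeQuark internals:

    // profile.js: an application profile is just a list of plugin names.
    // All names here are hypothetical, for illustration only.
    const PROFILE = ['metrics', 'logging', 'auth'];

    // Load only the plugins the application asked for, and export the loaded array.
    async function loadProfile(profile) {
      const modules = await Promise.all(
        profile.map((name) => import(`./plugins/${name}.js`))
      );
      return modules.map((mod) => mod.default);
    }

    // A streaming app and a content-production app can share plugin code this way,
    // while a plugin built for one is never loaded into the other.
    loadProfile(PROFILE).then((plugins) => {
      for (const plugin of plugins) plugin.init();
    });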

Looking ahead

In the future, Hermeto wants to allow teams to create specific application profiles that they can give to customers. Additionally, Netflix may be able to switch from application versions to application profiles as the code breaks into smaller and smaller pieces. 

To finish his talk, Hermeto gave his personal recommendations for open source projects that are useful for observability and debuggability. Essentially, a starting point for building out your own production-level application!  

Personal recommendations for open source projects 

  • Metrics and alerts
  • Centralized logging
  • Distributed tracing
  • Diagnostics
  • Exception management

Node.js Package Maintenance: Bridging the gap between maintainers and consumers

By Blog, Node.js, Project Update

This blog was written by Michael Dawson with input from the Node.js package Maintenance Working Group. It was originally posted on the Node.js blog. Node.js is an OpenJS Foundation Impact Project.

A while back I talked about the formation of the Node.js package maintenance Working Group and some of the initial steps that we had in mind in terms of helping to move the ecosystem forward. You can read up on that here if you’d like:
https://medium.com/@nodejs/call-to-action-accelerating-node-js-growth-e4862bee2919.

This blog is a call to action for package maintainers: to move one of our initiatives forward, we need your help.

It’s been almost 2 years and we’ve been working on a number of initiatives which you can learn more about through the issues in the package-maintenance repo. Things never move quite as fast as we’d like, but we are making progress on a number of different areas.

One area that we identified was:

Building and documenting guidance, tools and processes that businesses can use to identify packages on which they depend, and then to use this information to be able to build a business case that supports their organization and developers helping to maintain those packages.

We started by looking at how to close the gap between maintainers and consumers in terms of expectations. Mismatched expectations can often be a source of friction, and by providing a way to communicate the level of support behind a package, we believe we can:

  • help maintainers communicate the level of support they can/want to provide, which versions of Node.js they plan to support going forward, and the current level of backing in place to help keep development of the package moving forward.
  • reduce potential friction due to mismatched expectations between module maintainers and consumers
  • help consumers better understand the packages they depend on so that they can improve their planning and manage risk.

In terms of managing risk we hope that by helping consumers better understand the key packages they depend on, it will encourage them to support these packages in one or more ways:

  • encouraging their employees to help with the ongoing maintenance
  • providing funding to the existing maintainers
  • supporting the Foundation the packages are part of (for example the OpenJS Foundation)

After discussion at one of the Node.js Collaborator Summits where there was good support for the concept, the team has worked to define some additional metadata in the package.json in order to allow maintainers to communicate this information.

The detailed specification for this data can be found in: https://github.com/nodejs/package-maintenance/blob/master/docs/PACKAGE-SUPPORT.md.

The TL;DR version is that it allows the maintainer to communicate:

  • target: the platform versions that the package maintainer aims to support. This is different from the existing engines field in that it expresses a higher-level intent, like the current LTS version, for which the specific versions can change over time.
  • response: how quickly the maintainer chooses to, or is able to, respond to issues and contacts for that level of support
  • backing: how the project is supported, and how consumers can help support the project.
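As a rough illustration only — the authoritative field names and placement are defined in the specification linked above, and this sketch paraphrases them rather than quoting the schema — the metadata might look something like:

    {
      "name": "my-package",
      "version": "1.2.3",
      "support": {
        "target": { "node": "lts" },
        "response": { "type": "time-permitting" },
        "backing": { "hobby": true }
      }
    }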

We completed the specification a while ago, but before asking maintainers to start adding the support information we wanted to provide some tooling to help validate that the information added was complete and valid. We’ve just finished the first version of that tool, which is called support.

The tool currently offers two commands:

  • show
  • validate

The show command displays a simple tree of the packages for an application and the raw support information for those packages. More sophisticated commands to help consumers review and understand the support info will make sense later, but at this point it’s more important to start getting the information filled in, as that is needed before deeper analysis is possible.

The validate command helps maintainers validate that they’ve added/defined the support information correctly. If there are errors or omissions it will let the maintainer know so that as support information is added it is high quality and complete.
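Assuming the tool is run from an application directory, usage might look like the following; the exact invocation is an assumption on our part, so check the tool’s own documentation:

    npx support show      # print the package tree with raw support info
    npx support validate  # check that your support info is complete and valid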

Our call to action is for package maintainers to:

  • Review the support specification and give us your feedback if you have suggestions/comments.
  • Add support info to your package
  • Use the support tool in order to validate the support info and give us feedback on the tool
  • Let us know when you’ve added support info so that we can keep track of how well we are doing in terms of the ecosystem supporting the initiative, as well as knowing which real-world packages we can use/reference when building additional functionality into the support tool.

We hope to see the ecosystem start to provide this information and look forward to seeing what tooling people (including the package-maintenance working group and others) come up with to help achieve the goals outlined.

messageformat is Working Hard to Make Themselves Obsolete

By Blog, In The News, messageformat, Project Update

This post originally appeared on DZone on September 1, 2020.

messageformat is an OpenJS Foundation project that handles both pluralization and gender in applications. It helps keep messages in human-friendly formats, and can be the basis for the tone and accuracy that are critical for applications. Pluralization and gender are not simple challenges, and deciding which message format to implement can be pushed down the priority list as development teams make decisions on resources. However, this can lead to tougher transitions later on in the process, with both technology and vendor lock-in playing a role.

Quick note: The upstream spec is called ICU MessageFormat. ICU stands for International Components for Unicode: a set of portable libraries that are meant to make working with i18n easier for Java and C/C++ developers. If you’ve worked on a project with i18n/l10n, you may have used the ICU MessageFormat without knowing it. 

To find out more about messageformat, I spoke with Eemeli Aro, Software Developer at Vincit, and OpenJS Cross Project Council (CPC) member. Aro maintains the messageformat libraries, and actively participates in various efforts to improve JavaScript localization. Aro spoke on “The State of the Art in Localization” at last year’s Node+JS Interactive. Aro is an active participant in ECMA-402 processes, runs the monthly HelsinkiJS meetups, and helps organise React Finland conferences. 

How do formats deal with nuances in language? 

It’s all about choices. Variance, e.g. how the greeting used by a program could vary from one instance to the next, gets dealt with by having the messaging format that you’re using support the ability to have choices. So you can have some random number coming in and depending on the choice of that random number, you select one of a number of choices. This functionality isn’t directly built into ICU MessageFormat, but it’s very easily implementable in a way that gets you results. 

We need to decide how we deal with choices and whether you can have just a set number of different choice types. Is it a sort of generic function that you can define and then use? It’s an interesting question, but ICU MessageFormat doesn’t yet provide an easy, clear answer to that. But it provides a way of getting what you want. 
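For instance, a varying greeting can be built on ICU MessageFormat’s select choices, with the application supplying the randomness. A hedged sketch using the messageformat library (API per its v3 docs; treat the details as illustrative):

    import MessageFormat from 'messageformat';

    const mf = new MessageFormat('en');
    // The format defines the choices; the app picks one at random.
    const greet = mf.compile(
      '{variant, select, morning {Good morning!} cheery {Hi there!} other {Hello!}}'
    );

    const variants = ['morning', 'cheery', 'other'];
    const pick = variants[Math.floor(Math.random() * variants.length)];
    console.log(greet({ variant: pick }));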

What are the biggest problems with messaging formats?

Perhaps the biggest problem is that while ICU MessageFormat is the closest we have to a standard, that doesn’t mean it is in standard use by everyone. There are a number of other standards, and various versions of them are used by different tools, workflows and other localization processes. The biggest challenge is that when you have some kind of interface and you want to present some messages in that interface, there isn’t one clear solution that’s always the right one for you.

And then it also becomes challenging because, for the most part, almost any solution that you end up with will solve most of the problems that you have. This is the scope in which it’s easy to get lock-in. Effectively, if you have a workflow that works with one standard or one set of tools or one format that you’re using, then you have some sort of limitation. Eventually, at some point, you will want to do something that your systems aren’t supporting. You can feel like it’s a big cost to change that system, and therefore you make do with what you have, and then you get a suboptimal workflow and a suboptimal result. Eventually, your interface and whole project may not work as well. 

It’s easy to look at messageformat and go, “That’s too complicated for us, let’s pick something simpler.” You end up being stuck with “that something simpler” for the duration of whatever it is that you’re working on. 

You’re forced to make a decision between two bad options. So the biggest challenge is it would be nice to have everyone agree that “this is the right thing to do” and do it from the start! (laughs) 

But of course that is never going to happen. When you start building an interface like that, you start with just having a JSON file with keys and messages. That will work for a long time, for a really great variety of interfaces, but it starts breaking at some point, and then you start fixing it, and then your fix has become your own custom bespoke localization system. 

Is technology lock-in a bigger problem than vendor lock-in? 

Technology lock-in is the largest challenge. Of course there is vendor lock-in as well, because there are plenty of companies offering their solutions and tools and systems for making all of this work and once you’ve started using them, you’re committed. Many of them use different standards than messageformat, their own custom ones. 

In the Unicode working group where I’m active, we are essentially talking about messageformat 2. How do we take the existing ICU MessageFormat specification and improve upon it? How do we make it easier to use? How do we make sure there’s better tooling around it? What sorts of features do we want to add or even remove from the language as we’re doing this work? 

messageformat, the library that I maintain and is an OpenJS project, is a JavaScript implementation of ICU MessageFormat. It tries to follow the specification as closely as it can.

Does using TypeScript help or hurt with localization? 

For the most part, it works pretty well. TypeScript brings in an interesting question of “How do you type these messages that you’re getting out of whatever system you’re using?” TypeScript itself doesn’t provide for plugins at the parser level, so you can’t define that. When there’s input in JavaScript, for example, for a specific file, then you can use specific tools for the different types that are coming out of it. Because messages aren’t usually one by one by one. You have messages in collections, so if you get one message out of a collection in JavaScript, you can make very safe assumptions about what the shape of that message is going to be. 

But of course in TypeScript you need to be much more clear about what the shape of that message is. And, if for whatever reason not everything is a string, then it gets complicated. 

It’s entirely manageable. You can use JavaScript tools for localization in a TypeScript environment; there are just these edge cases that could have better solutions than we currently have, but work on those kinds of solutions requires some work on TypeScript’s behalf as well.

Should open source projects build their own solution for localization? 

I think this is one of those cases where it’s good to realize that this is JavaScript. If there’s a problem you can express briefly, you go look and you’ll find five competing solutions that are all valid in one way or another. Whatever your problem or issue is, it is highly likely that you will find someone else has already solved your problem for you, you just need to figure out how to adapt their solution to your exact problem. 

There are a number – like three or four – whole stacks of tooling for various environments for localization. And these are the sorts of things that you should be looking at, rather than writing your own. 

How is the OpenJS Foundation helping with localization?

Well, along with messageformat, OpenJS hosts Globalize, which utilizes the official Unicode CLDR JSON data.

The greatest benefit that I or the messageformat project is currently getting from the OpenJS Foundation is that the Standards Working Group is quite active. And with their support, I’m actively participating in the Unicode Consortium working group I mentioned earlier where we are effectively developing the next version of the specification for messageformat. 

How far off is the next specification for messageformat?

It’s definitely a work in progress. We have regular monthly video calls and are making good progress otherwise. I would guess that we might get something in actual code maybe next year. But it may actually take longer than that for messageformat to become standard and ready.

How will localization be handled differently in 3-5 years? 

The messageformat working group didn’t start out under Unicode, it started out under ECMA-402. That whole work started from looking to see what we should do about adding support for messageformat to JavaScript. And this is one of the main expected benefits to come out of the Unicode messageformat working group. In the scope of 3-5 years, it is reasonable to assume that we are going to have something like Intl.MessageFormat as a core component in JavaScript, which will be great!

Effectively, this is also coming back to what the OpenJS Foundation is supporting. What I’m primarily trying to push with messageformat is to make the whole project obsolete! Right now we’re working on messageformat 3, which is a refactoring of some breaking changes. But hopefully a later version will be a polyfill for the actual Intl.MessageFormat functionality that will come out at some point. 

On a larger scale, it’s hard to predict how much non-textual interfaces are going to become a more active part of our lives. When you’re developing an application that uses an interface that isn’t textual, what considerations do you really need to bring in and how do you really design everything to work around that? When we’re talking about Google Assistant, Siri, Amazon Echo, their primary interface is really language, messages. Those need to be supported by some sort of backing structure. So can that be messageformat? 

Some of the people working on these systems are actively participating in the messageformat 2 specifications work. And through that, we are definitely keeping that within the scope of what we’re hoping to do. 

Try it out now

To install the core messageformat package, use:

npm install --save-dev messageformat@next

This includes the MessageFormat compiler and a runtime accessor class that provides a slightly nicer API for working with larger numbers of messages. More information: messageformat.github.io/messageformat/v3
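For a quick taste, here is a minimal sketch of compiling a message that handles both gender and pluralization. The message syntax is standard ICU MessageFormat; the API shown assumes the v3 library documented at the link above:

    import MessageFormat from 'messageformat';

    const mf = new MessageFormat('en');
    const msg = mf.compile(
      '{gender, select, female {She} male {He} other {They}} sent ' +
      '{count, plural, one {# photo} other {# photos}}.'
    );

    console.log(msg({ gender: 'female', count: 1 })); // "She sent 1 photo."
    console.log(msg({ gender: 'other', count: 5 }));  // "They sent 5 photos."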

Project Update: Official AMP Plugin for WordPress

By AMP, Announcement, Blog, Project Update

Success with WordPress,
powered by AMP

The missions of the WordPress and AMP open source projects are very well aligned. AMP, a growth project at the OpenJS Foundation, seeks to democratize performance and the building of great page experiences, which is at the core of WordPress’ goal of democratizing web publishing. 

Today the AMP team is very excited to release v2.0 of the Official AMP Plugin for WordPress! Lots of work went into this release, and it is loaded with many improvements and new capabilities in the areas of usability, performance, and flexibility. Read on to learn more, or check out the official AMP Blog for the full release notes.

AMP brings “performance-as-a-service” to WordPress, providing out-of-the-box solutions, a wide range of coding and performance best practices, always up-to-date access to the latest web platform capabilities, and effective control mechanisms (e.g. guard rails) to enable consistently good performance. AMP’s capabilities and the guard rails it provides allow WordPress creators to take advantage of the openness and flexibility of WordPress while minimizing the resources needed to develop and maintain sites that perform consistently well.

The Official AMP Plugin for WordPress is developed and maintained by AMP project contributors to bring the pillars of AMP content publishing to the fingertips of WordPress users, by:

  1. Automating as much as possible the process of generating AMP-valid markup, letting users follow workflows that are as close as possible to the standard workflows on WordPress they are used to.
  2. Providing effective validation tools to help deal with AMP incompatibilities when they happen, including aspects of identifying errors, contextualizing them, and reporting them accurately.
  3. Providing support for AMP development to make it easier for WordPress developers to build AMP compatible ecosystem components, and build websites and solutions with AMP-compatibility built in.
  4. Supporting the serving of AMP pages on Origin, making it easier for site owners to take advantage of mobile redirection, AMP-to-AMP linking, minimization of AMP validation issues surfaced in Search Console, and generation of optimized AMP pages by default.
  5. Providing turnkey solutions for segments of WordPress creators and publishers to be able to go from zero to AMP content generation in no time, regardless of technical expertise or availability of resources. 

To learn more about AMP in WordPress, please check the release post on the official AMP Project Blog. If you haven’t tried it already, download the plugin today and get started on the path of consistently great performance for your WordPress site! And, if you are interested in becoming a contributor to the AMP Plugin for WordPress, you can engage with us in the AMP plugin GitHub repository.

Fastify: Graduation, performance and the future

By Blog, Fastify, Project Update

Fastify is moving from the Incubation stage to a Growth Project! Within the OpenJS Foundation, this is a major step forward.

New projects at OpenJS start as incubation projects while maintainers complete the on-boarding checklist to join the Foundation. This includes documenting the project’s infrastructure, transferring the IP, and adopting the OpenJS Code of Conduct. When a project graduates, it has readied itself for Foundation support. At OpenJS, we share best practices and reduce redundant administrative work across projects, such as non-technical governance, to help projects grow.

The Cross Project Council (CPC) centralizes coordination among projects as well as certain technical governance and moderation processes, and oversees the progression of projects between stages of their life cycles. Fastify has passed all of its requirements and we are happy to welcome them as a Growth Project!

To find out more about Fastify and what’s next, we talked with Matteo Collina, one of the Lead Maintainers of the Fastify team, Technical Director at NearForm, Node.js Technical Steering Committee member and OpenJS Foundation Cross Project Council member.

Are there any benchmarks that people should pay attention to regarding web performance?

I often say that “performance does not matter, until it absolutely does.” Most websites and applications do not need to be fast or scale to thousands of servers. Most developers at small and big companies alike will not (and should not) care about performance at all. Their bigger concerns are maintainability and speed of delivery. As a result, applications become bigger and slower.

As an example, a lot of developers care only about the latency of a single request when the server is idle, completely ignoring the latency and load introduced by server-side processing. Providing a snappy user experience requires both the front end and the backend to work in concert and play to each other’s strengths.

What are the most important metrics people should pay attention to with regard to web performance (faster networks, run time)?

The most important metric for Node.js applications is event loop latency. We define this as the time needed to process some part of an incoming HTTP request. The higher the throughput of our application, the smaller this needs to be. Let’s take a quick example: imagine a Node.js server that can process an HTTP request in 10 milliseconds of CPU time. Do you think this server is fast? Given that most deployments have 1 CPU per container (or even less), we can say that a single container can process around 100 requests per second.

However, we cannot say whether our server is fast or slow, as it depends on the load. If the server receives fewer than 100 req/sec, it will appear snappy and “fast.” But above 100 req/sec, the service will “lag behind” and the latency of every request will start increasing.
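To make event loop latency observable in practice, Node.js ships a built-in histogram in perf_hooks (available since Node 11.10). A minimal sketch:

    // Sample the event loop delay with Node's built-in perf_hooks histogram.
    const { monitorEventLoopDelay } = require('perf_hooks');

    const histogram = monitorEventLoopDelay({ resolution: 20 });
    histogram.enable();

    setInterval(() => {
      // Histogram values are in nanoseconds; convert to milliseconds.
      console.log('event loop delay p99 (ms):', histogram.percentile(99) / 1e6);
      histogram.reset();
    }, 5000);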

Fastify helps deploy Node.js applications at scale by applying load-shedding techniques in the under-pressure plugin. Essentially, if the server is busy, it will start rejecting requests so that the load balancer can try to serve them from another instance.
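A minimal sketch of wiring that up; the option names are recalled from the under-pressure README and should be treated as assumptions:

    const fastify = require('fastify')();

    // Start shedding load once the event loop falls too far behind.
    fastify.register(require('under-pressure'), {
      maxEventLoopDelay: 1000,     // milliseconds of tolerated delay
      message: 'Under pressure!',  // body of the 503 response
      retryAfter: 50,              // Retry-After header, in seconds
    });

    fastify.get('/', async () => ({ hello: 'world' }));

    fastify.listen(3000);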

Now that Fastify has graduated incubation, what’s next for the project in terms of big milestones?

We’ll rest and recover! The last few months have been a race to ship Fastify v3, and now we are graduating!

It’s time to start planning Fastify v4 for 2021.

Ajv Joins OpenJS Foundation as an Incubation Project

By Announcement, Blog, Project Update

Today, Ajv, a JSON Schema validator for both server-side and client-side JavaScript applications, has entered into public incubation at the OpenJS Foundation. Ajv is a key part of the JavaScript ecosystem, used by a large number of web developers with millions of projects depending on it indirectly, via other libraries. 

In addition to becoming an incubating project, Ajv was recently awarded a grant from Mozilla’s Open Source Support (MOSS) program in the “Foundational Technology” track. This grant is continued validation of the important role Ajv plays within the JavaScript ecosystem and will help ensure this work continues.

“A diverse set of widely used open source projects is why we exist and how our community continues to thrive,” said Robin Ginn, OpenJS Foundation Executive Director. “It’s great when these projects recognize the value of being part of the OpenJS community and benefit from what we are creating here. I’m thrilled to welcome Ajv as an incubation project to the OpenJS Foundation and excited to support its open development among web developers.”

Ajv is a leading JSON Schema validator that is highly specification compliant, supporting JSON Schema drafts 4 to 7. Ajv is also extensible via custom keywords and plugins, and is one of the fastest JSON Schema validators. Additionally, Ajv gets 120 million monthly downloads on npm. Many projects within the OpenJS Foundation use Ajv, including webpack, ESLint, and Fastify.
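For readers new to it, a minimal sketch of Ajv in action (standard JSON Schema keywords; nothing here is specific to any one draft):

    const Ajv = require('ajv');
    const ajv = new Ajv();

    const schema = {
      type: 'object',
      properties: {
        name: { type: 'string' },
        age: { type: 'integer', minimum: 0 },
      },
      required: ['name'],
      additionalProperties: false,
    };

    // Compile once, validate many times; this is where Ajv's speed comes from.
    const validate = ajv.compile(schema);

    console.log(validate({ name: 'Ada', age: 36 })); // true
    console.log(validate({ age: -1 }));              // false
    console.log(validate.errors);                    // details on what failed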

“As CPC chair, I’m really happy that Ajv has become an incubating project at the OpenJS Foundation,” said Joe Sepi, OpenJS Foundation Cross Project Council Chair. “Ajv is an important project within the JavaScript open source space — many of our own projects already use it. This is an important step for Ajv and I, along with the entire CPC, am excited Ajv is taking this step with the OpenJS Foundation.”

“As Ajv’s CPC liaison, the person who helps guide potential projects through the application process, I’m excited for what’s to come for Ajv within the OpenJS Foundation,” said Dylan Schiemann, CEO at Living Spec and co-creator of Dojo. “As an incubating project, Ajv has a unique opportunity to continue its path toward sustainability and growth. As a user of Ajv and an early advocate for JSON Schema, we’re super excited to work with the project and support its growth as part of the OpenJS Foundation.”

“Ajv has become a centerpiece of all data-validation logic in my open-source projects and businesses. It’s spec-compliant, extensible, fast and has amazing support. Ajv joining the OpenJS Foundation will greatly benefit the entire JavaScript ecosystem,” said Gajus Kuizinas, CTO of Contra.

“I’ve been developing Ajv since 2015 and it is nice to see it being so widely used – it would never have happened without almost 100 contributors and a much larger number of users. Both the OpenJS Foundation and Mozilla grant will help Ajv become a permanent fixture in the JavaScript ecosystem – I am really looking forward to the next phase of Ajv development,” said Evgeny Poberezkin, the developer of Ajv.

By joining the OpenJS Foundation, Ajv gains better support in multiple organizational and infrastructure areas. Furthermore, Ajv will be able to ensure governance and Code of Conduct enforcement to make sure that Ajv will continue to be stable. Joining will also help Ajv to grow and gain contributors, and potentially help with wider enterprise adoption through greater confidence and overall stability for the project.

As a collaborative project with transparency-first principles, the OpenJS Foundation is happy to welcome Ajv as an incubation project and looks forward to the many successes the project will have within its new home.

Start Contributing Now!

If you’d like to help build Ajv, you can start by looking at the Contributing Guidelines. Documentation, Issues, Bug Reports and lots more can be found here. Every contribution is appreciated! Please feel free to ask questions via Gitter chat.

Project News: Fastify ships v3

By Blog, Fastify, Project Update

Fastify, an incubation project at the OpenJS Foundation, has just released its latest version, Fastify v3.

Fastify is the fast, open source Node.js web framework that focuses on minimal performance overhead, an excellent developer experience, and a flexible plugin architecture.

Key updates include:

1. Better support for TypeScript

2. Ability to embed Express, for reusing custom modules/libraries (see the sketch after this list)

3. Simplified validation support with schema references
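As a hedged sketch of item 2, the embedding works through a companion plugin. This assumes the fastify-express plugin, and uses the cors middleware purely as an example of existing Express code being reused:

    const fastify = require('fastify')();

    async function main() {
      // fastify-express adds Express compatibility, including use().
      await fastify.register(require('fastify-express'));

      // Reuse an existing Express middleware unchanged.
      fastify.use(require('cors')());

      await fastify.listen(3000);
    }

    main().catch((err) => {
      console.error(err);
      process.exit(1);
    });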

Read about all the details on the Fastify Blog, and hear it directly from Matteo Collina,  one of the Lead Maintainers of the Fastify team, Technical Director at NearForm, Node.js Technical Steering Committee member and OpenJS Foundation Cross Project Council member. 

Project News: Electron releases a new version

By Announcement, Blog, Electron, Project Update

Congrats to the Electron team on their latest version release, Electron 9.0!

It includes upgrades to Chromium 83, V8 8.3, and Node.js 12.14. They’ve added several new API integrations for their spellchecker feature, enabled the PDF viewer, and much more!

Read about all the details on the Electron blog here.

Learn more about Electron and why it has joined the Foundation as an incubation project.

Project News: What’s new in ESLint v7.0.0

By Blog, ESLint, Project Update

ESLint is an open source JavaScript linting utility and an At-Large project at the OpenJS Foundation. 

Congrats to the ESLint team on their most recent release, v7.0.0! This release brings updates including an improved developer experience, core rule changes, a new ESLint class, and much more.

ESLint v7.0.0 Highlights:

Dropping support for Node.js v8

Node.js 8 reached EOL in December 2019, and we are officially dropping support for it in this release.

Core rule changes

  1. The ten Node.js/CommonJS rules in core have been deprecated and moved to the eslint-plugin-node plugin.
  2. Several rules have been updated to recognize bigint literals and warn on more cases by default.
  3. eslint:recommended has been updated with a few new rules: no-dupe-else-if, no-import-assign, and no-setter-return.

Improved developer experience

  1. The default ignore patterns have been updated. ESLint will no longer ignore .eslintrc.js and bower_components/* by default. Additionally, it will now ignore nested node_modules directories by default.
  2. ESLint will now lint files with extensions other than .js if they are explicitly defined in overrides[].files (no need to use the --ext flag; see the config sketch after this list).
  3. ESLint now supports descriptions in directive comments, so things like disable comments can now be clearly documented!
  4. Additional validation has been added to the RuleTester class to improve testing custom rules in plugins.
  5. ESLint will now resolve plugins relative to the entry configuration file. This means that shared configuration files located outside the project can now be colocated with the plugins they require.
  6. Starting in ESLint v7, configuration files and ignore files passed to ESLint using the --config path/to/a-config and --ignore-path path/to/a-ignore CLI flags, respectively, will resolve from the current working directory rather than the file location. This allows users to utilize shared plugins without having to install them directly in their project.
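As a sketch of items 2 and 6 in practice, here is a hypothetical .eslintrc.js; the @typescript-eslint/parser dependency is assumed purely for illustration:

    // .eslintrc.js
    module.exports = {
      root: true,
      extends: ['eslint:recommended'],
      overrides: [
        {
          // Files matched here are linted automatically; no --ext flag needed.
          files: ['*.ts'],
          parser: '@typescript-eslint/parser',
        },
      ],
    };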

New ESLint class

  1. The CLIEngine class provides a synchronous API that is blocking the implementation of features such as parallel linting, supporting ES modules in shareable configs/parsers/plugins/formatters, and adding the ability to visually display the progress of linting runs. The new ESLint class provides an asynchronous API that ESLint core will now use going forward (a usage sketch follows). CLIEngine will remain in core for the foreseeable future but may be removed in a future major version.
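A minimal sketch of the new asynchronous API, adapted from the usage pattern in the ESLint documentation; treat the details as illustrative:

    const { ESLint } = require('eslint');

    (async function main() {
      // The new class is async, which unblocks features like parallel linting.
      const eslint = new ESLint({ fix: true });
      const results = await eslint.lintFiles(['src/**/*.js']);

      // Write any autofixes back to disk, then print a report.
      await ESLint.outputFixes(results);
      const formatter = await eslint.loadFormatter('stylish');
      console.log(formatter.format(results));
    })().catch((error) => {
      process.exitCode = 1;
      console.error(error);
    });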

Check out the release notes for all updates here and the migration guide here.

For more information on ESLint and how to get involved go to https://eslint.org/.