LoopBack Joins OpenJS Foundation As New Incubating Project

By Announcement, Blog, OpenJS World, Project Update

LoopBack is the newest incubating project at the OpenJS Foundation

LoopBack is a popular Node.js framework for API creation and a platform for building large-scale Node.js applications using proven patterns and TypeScript, with support for SOAP and enterprise databases. Today, IBM announced it is contributing LoopBack to the OpenJS Foundation, trusting that the project will continue to grow and thrive with support from the community’s active core of developers. As it stands today, nearly half of all LoopBack pull requests come from community contributors outside of IBM.

“We are thrilled to welcome LoopBack into the OpenJS Foundation. As the vendor-neutral home to almost 40 open source projects, the OpenJS Foundation exists to sustain the JavaScript ecosystem on a global scale,” said Robin Ginn, OpenJS Foundation executive director. “We look forward to providing resources and support to LoopBack to help their community grow.”

“LoopBack joining as an incubating project is an important addition to the Foundation,” said IBM’s Joe Sepi, who is also the chairperson of the OpenJS Foundation Cross Project Council. “LoopBack is a great example of how interconnected JavaScript technologies can be and it’s always great to welcome new projects into the fold. On behalf of the OpenJS Foundation Cross Project Council, I am happy to welcome LoopBack to the foundation.”

LoopBack makes it easy to create a REST API with minimal coding, providing a consistent way to design and implement APIs, including the REST layer, models, and the ORM. These capabilities matter because APIs let businesses extend the reach of their products and services to the developers who consume those APIs. Current users of LoopBack include GoDaddy, Symantec, IBM, and others.
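
For readers new to the framework, here is a minimal sketch of what “a REST API with minimal coding” can look like in LoopBack 4. It assumes a scaffolded LoopBack project with `@loopback/rest` installed; the controller name and route are illustrative only.

```typescript
// Minimal sketch of a LoopBack 4 controller (assumes @loopback/rest is installed
// and the project's usual TypeScript decorator settings).
import {get, param} from '@loopback/rest';

export class PingController {
  // GET /ping/{name} responds with a small greeting object
  @get('/ping/{name}')
  ping(@param.path.string('name') name: string): object {
    return {greeting: `Hello, ${name}`};
  }
}
```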

Why Join OpenJS?

LoopBack was created in 2013 as the API economy and Node.js were taking flight. Since then, the open source project has reached a steady level of contributor diversity and product stability. Currently, LoopBack gets approximately 180K monthly downloads. Joining the OpenJS Foundation will help the open source project continue to grow in a vendor-neutral space with an open governance model. 

“We’re excited for the next chapter of LoopBack with the OpenJS Foundation as its new home,” said Raymond Feng, Co-founder and CTO at Abridged, Inc. “This is a thrilling moment that touches me professionally and personally, as I have been developing, maintaining, and evangelizing the framework ever since I created LoopBack with Albert and Ritchie at StrongLoop in 2013.” Feng adds, “By betting on Node.js as the platform for the API economy, we built LoopBack to help developers create APIs and microservices in JavaScript/TypeScript that connect to databases, services, and infrastructure with minimal coding. I’m grateful that StrongLoop and IBM’s investment and sponsorship made it possible for LoopBack to continue to innovate and grow over the past 8 years.”

Beyond the value of the framework itself, LoopBack’s leadership has strived to build a diverse open source community and to develop contributors and maintainers for the project. The OpenJS Foundation is a natural next step for LoopBack, as it truly reflects the project’s culture of collaboration and its commitment to further grow the project and community under open governance.

Incubating projects under the OpenJS Foundation are projects that are in the process of completing their onboarding checklist to join the foundation. There are currently more than 37 open source projects under the OpenJS Foundation umbrella.

Resources

The OpenJS Foundation provides a wide range of resources for organizations and individuals involved in the adoption and ongoing development of key JavaScript solutions and related technologies.

Project News: Node.js 16 Available

By Announcement, Blog, Node.js, Project Update

The Node.js Project, a hosted project of the OpenJS Foundation, has announced the release of Node.js 16. Highlights include the update of the V8 JavaScript engine to 9.0, prebuilt Apple Silicon binaries, and additional stable APIs.
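
As an illustration of the “additional stable APIs”, the sketch below uses the Timers Promises API, which graduated from experimental to stable in this release. It assumes Node.js 16 or later; the function name is illustrative.

```typescript
// Minimal sketch (Node.js 16+): awaiting a promise-based timer instead of
// passing a callback to setTimeout.
import {setTimeout as sleep} from 'timers/promises';

async function main(): Promise<void> {
  console.log(`Running on ${process.version}`); // e.g. "v16.0.0"
  const result = await sleep(100, 'done');      // resolves to 'done' after 100 ms
  console.log(result);
}

main();
```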

You can download the latest release from https://nodejs.org/en/download/current/, or use Node Version Manager on UNIX to install with `nvm install 16`. The Node.js blog post containing the changelog is available at https://nodejs.org/en/blog/release/v16.0.0.

Initially, Node.js 16 will replace Node.js 15 as our ‘Current’ release line. As per the release schedule, Node.js 16 will be the ‘Current’ release for the next 6 months and will then be promoted to Long-term Support (LTS) in October 2021. Once promoted to long-term support, the release will be given the codename ‘Gallium’.

As a reminder — Node.js 12 will remain in long-term support until April 2022, and Node.js 14 will remain in long-term support until April 2023. Node.js 10 will go End-of-Life at the end of this month (April 2021). More details on our release plan/schedule can be found in the Node.js Release Working Group repository.

A new major release is a sum of the efforts of all of the project contributors and Node.js collaborators! Congrats to all who made it possible!

Read the full blog with all the details on the Node.js blog.

WebdriverIO: OpenJS Foundation Live Q&A

By AMA, Blog, Project Update, WebdriverIO

WebdriverIO was created to allow users to automate any application written with modern web frameworks, as well as native mobile applications for Android and iOS. WebdriverIO is a project hosted at the OpenJS Foundation.
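
To give a flavour of what that automation looks like, here is a minimal sketch of a WebdriverIO test. In a real project the wdio test runner injects the `browser`, `$`, `describe`, and `it` globals; they are declared loosely here only so the snippet stands alone, and the URL and selector are hypothetical.

```typescript
// Loose stand-ins for the globals the WebdriverIO test runner provides.
declare const browser: {url(path: string): Promise<void>};
declare const $: (selector: string) => Promise<{click(): Promise<void>}>;
declare function describe(name: string, fn: () => void): void;
declare function it(name: string, fn: () => Promise<void>): void;

describe('login form', () => {
  it('opens the page and submits the form', async () => {
    await browser.url('https://example.com/login'); // drive the automated browser
    const submit = await $('#submit');              // query an element on the page
    await submit.click();                           // interact like a real user
  });
});
```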

Members of the WebdriverIO team recently joined the OpenJS Foundation for a live Q&A on YouTube, aimed at giving insight into how WebdriverIO works and where it expects to go in the future. The session was moderated by Christian Bromann and included insights from Kevin Lamping, Erwin Heitzman, and Wim Selles. Users were able to ask questions via Twitter and the live YouTube chat.

Questions ranged from how users can participate in the WebdriverIO project with no technical background to best practices for storing credentials and variables. 

The full AMA is available here: OpenJS Foundation AMA – Webdriver

Timestamps

0:00 Brief Introduction

0:59 Moderator Introduction

6:30 Donation Announcement

14:00 Why doesn’t WebdriverIO support Jest?

17:40 Can I contribute with no background knowledge?

22:45 Could you mock responses through selenium grid?

24:14 WebdriverIO for mobile apps?

25:50 Udemy/Coursera

29:58 Best way to categorize a test suite

30:55 How to achieve a good test environment?

34:08 Best practice for variables 

35:28 Where to store credentials

36:00 Testing Accessibility with WDIO

41:26 Could we ever see WebdriverIO use AI?

43:30 Sync Mode

48:38 Layer Automation

50:50 Who owns WDIO?

54:00 Where do you see WebdriverIO in 5 years?

To learn more about WebdriverIO and how you can get involved, please visit their website here.

Node-RED Version 1.3 Available Now!

By Blog, Node-RED, Project Update

Node-RED, the flow-based programming tool, released version 1.3 in April 2021. Node-RED is a growth project at the OpenJS Foundation.

Node-RED is a low-code approach to programming event-driven applications. Flow-based programming creates networks that lend themselves to visual representation, making it a more accessible way of programming. JavaScript functions can be built using a rich text editor, and a built-in library provides access to useful functions, templates, or flows for reuse.
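
To make the “JavaScript functions” part concrete, here is a sketch of the kind of code you write inside a Node-RED Function node. In the editor you write only the body; the wrapper function and message type below are illustrative.

```typescript
// The editor passes each incoming message as `msg`; whatever is returned is
// sent on to the next node in the flow.
interface NodeRedMessage {
  payload: unknown;
  topic?: string;
}

function onMessage(msg: NodeRedMessage): NodeRedMessage {
  msg.payload = `Hello, ${msg.payload ?? 'world'}`; // transform the payload
  return msg;                                       // forward the modified message
}
```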

Node-RED was originally created in 2013 by members of IBM’s Emerging Technology Services group and has been developed in the open ever since. It was one of the founding projects of the JS Foundation in 2016 and came into the OpenJS Foundation through the 2019 merger with the Node.js Foundation.

Users of Node-RED include Hitachi, Veritone, Go-IoT, Handy.ai, and many more.

Notable changes in Node-RED 1.3 include relabelling of tabs, nested references in Change/Switch nodes, and a new plugin framework for Node-RED. To make it easier for developers to use extra npm modules, Function nodes can now declare the modules they need so that those modules are automatically installed and made available to the node’s code. It is also now possible to configure a Change or Switch node to use nested references to message properties. The new configuration of Change nodes is cleaner and easier to read.

The new plugin framework for Node-RED allows for easier customization and feature additions. This feature is still in its infancy, but it will serve as the backbone for future iterations. Extra functions are implemented via plugins rather than core code, keeping the core smaller and allowing users to be more selective about which “extra” features they want. For now, there are two types of plugins available: editor theme plugins, which make installing and enabling new themes easier, and library source plugins, which allow additional libraries to be configured within the editor.

To learn more about the 1.3 release, you can read about it on the Node-RED website here.

Project News: NativeScript v8.0

By Blog, NativeScript, Project Update

New version signals growth and evolution

This week, NativeScript, an incubation project at the OpenJS Foundation, shipped version 8. NativeScript is an open source, community-driven framework that gives JavaScript developers direct access to native platform APIs. This release includes major upgrades, including streamlined development with a JavaScript-focused stack and improved efficiency for iOS and Android development, where feature parity is of utmost importance. Additionally, v8.0 makes cross-platform development effective, practical, and fun. Read the project’s blog here.
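
As a small illustration of that direct access to platform capabilities, here is a sketch assuming a NativeScript 8 project with `@nativescript/core` installed; the function name is illustrative.

```typescript
// Tap handler that reads device details (backed by the native iOS/Android APIs)
// and shows a native alert dialog.
import {Device, Dialogs} from '@nativescript/core';

export function onTap(): void {
  Dialogs.alert(`Running ${Device.os} ${Device.osVersion} on ${Device.model}`);
}
```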

“NativeScript brings together the convenience of web development with the capabilities and performance of the native mobile world,” said NativeScript Technical Steering Committee member, Stanimira Vlaeva.

What’s new?

Users can expect the following updates in the latest release: 

* Official Apple M1 support

* webpack5 support

* First class a11y support

* CSS box-shadow support (requested since 2015!)

* CSS text-shadow support

* New `hidden` binding property for more performance dialing cases

* New official eslint rules for NativeScript projects

* New `RootLayout` container – offering more dynamic creative view development

* New @nativescript/debug-ios package for deep view level investigations on your simulator or device

* New @nativescript/apple-pay plugin

* New @nativescript/google-pay plugin

* New website and revamped docs to better represent the present and future of NativeScript

* The first official NativeScript Best Practices Guide

* and more streamlining of core to further prepare for continual evolutionary enhancements

NativeScript 8.0 brings some valuable benefits, including:

  • Reduced costs for multi-platform delivery and improved long-term maintenance of TypeScript-based tech stacks
  • The ability for managers to engage the large JavaScript developer workforce
  • The ability to integrate with any popular frontend framework that teams would like to use

This new release signals solid footing for growth and natural, modern JavaScript evolution by addressing some of the oldest requested features. These include adding structural integrity with the official eslint package, support for creative view development via the new RootLayout, broader use-case applicability via the new Capacitor integration, support for the latest webpack 5, and a revamped website and documentation refresh, to name a few.

Get Involved!

Come join the fun! Clone the source (`git clone https://github.com/NativeScript/NativeScript`) and experiment with it however you like. Get involved in public discussions around NativeScript via the RFCs board: https://github.com/NativeScript/rfcs/discussions. Join the Discord channel to stay in touch: https://discord.gg/RgmpGky9GR

Project Update: nvm ships a new version

By Blog, nvm, Project Update

Today nvm released v0.38.0! This latest release includes new `nvm install` features, bug fixes, and updates to documentation.

Major updates include: 

  • Improvements to `nvm install`: OpenBSD source builds are now parallelized; `nvm install -b` will skip compiling from source
  • Bug fixes:
    • `nvm exec`: ensure `--` stops argument parsing
    • fix variable issues on some shells; avoid conflicts with oh-my-zsh global variables
    • fix npm exec on older versions of npm 7
    • fix `lts/-1` aliases being off-by-one
  • Lots of documentation improvements
  • Cloning the repo on Windows should no longer fail due to test filenames

Check out the release notes: https://github.com/nvm-sh/nvm/releases/tag/v0.38.0

Project Update: jQuery 3.6.0 Released!

By Announcement, Blog, jQuery, Project Update

Congrats to the jQuery team on their most recent release, version 3.6.0! jQuery is an Impact Project at the OpenJS Foundation.

The new release includes bug fixes and other improvements.

Thank you to all of you who participated in this release by submitting patches, reporting bugs, or testing, including Dallas Fraser, Michal Golebiowski-Owczarek, Wonseop Kim, Wonhyoung Park, Beatriz Rezener, Natalia Sroka, and the whole team.

To read more about the new version and to download, visit the project’s blog.

Project News: Electron ships v12

By Blog, Electron, Project Update

Electron, an impact project at the OpenJS Foundation, recently released an updated version, Electron 12.0.0. This new version includes upgrades to Chromium 89, V8 8.9 and Node.js 14.16. The team also added changes to the remote module, new defaults for contextIsolation, a new webFrameMain API, and general improvements. Full details of the new release can be found on the Electron blog.
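
One of those changes, the new default for `contextIsolation`, is easy to see in a main-process sketch. The example below assumes Electron 12 and simply makes the new default explicit; the URL is illustrative.

```typescript
// Minimal Electron 12 main process: contextIsolation now defaults to true,
// isolating the renderer from Node/Electron internals unless you opt out.
import {app, BrowserWindow} from 'electron';

app.whenReady().then(() => {
  const win = new BrowserWindow({
    webPreferences: {
      contextIsolation: true, // the new default as of Electron 12
    },
  });
  win.loadURL('https://example.com');
});
```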

Congrats to the Electron team!

Ajv Version 7: Big changes and improvements

By Announcement, Blog, Project Update

The following post was written by Evgeny Poberezkin, lead maintainer of Ajv (an incubation project of the OpenJS Foundation).

It’s been over a month since Ajv version 7 was released, and in this time many users have migrated to the new version. Ajv v7 is a complete rewrite that changed both the implementation language (to TypeScript) and the library design. I’m happy to share that the migration has been relatively smooth, without any major issues.

What’s new

I’ve written previously about what has changed in version 7, to summarize:

1. Support for JSON Schema draft 2019-09 – users have been asking specifically for the `unevaluatedProperties` keyword, which adds flexibility to validation scenarios, even if at a performance cost (see the sketch after this list).

2. More secure code generation in case untrusted schemas are used. Execution of code that might be embedded in untrusted schemas is now prevented by design, on a compiler type system level (and you don’t need to use TypeScript to benefit from it, unless you are defining your own keywords).

3. Standalone validation code generation is now comprehensively supported, for all schemas.

4. Strict mode protecting users from common mistakes when writing JSON Schemas.
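
As a sketch of point 1, the example below assumes Ajv v7, where the draft 2019-09 class is exported from `ajv/dist/2019`; the schema and data are illustrative.

```typescript
import Ajv2019 from "ajv/dist/2019";

const ajv = new Ajv2019();
const validate = ajv.compile({
  allOf: [
    {properties: {name: {type: "string"}}},
    {properties: {age: {type: "number"}}},
  ],
  // reject anything not evaluated by the combined subschemas above
  unevaluatedProperties: false,
});

console.log(validate({name: "Ada", age: 36}));          // true
console.log(validate({name: "Ada", unexpected: true})); // false – "unexpected" was never evaluated
```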

That is a big list of improvements, made possible thanks to a grant from Mozilla’s MOSS program.

Better for community

I’m also excited to share that Ajv v7 has attracted growing contribution interest from its users, in some cases with independent users collaborating with each other on new features.

There are several reasons for that, I believe:

– the code is better organised and written at a higher level – it is easier to read and to change than before.

– documentation is now better structured with additional sections specifically for contributors – code components and code generation.

I am really looking forward to all the new ideas and features coming from Ajv users.

What’s changed and removed

These improvements came at the cost of a full library redesign, so you need to be aware of the following changes during migration:

– Importing from your code

– Installation

– Code generation performance

– Validation of JSON Schema formats

– Migrating from JSON Schema draft 4

These changes are covered in detail below – they caused migration difficulties for some users.

Importing Ajv from your code

To import Ajv in TypeScript you can still use the default import:

 ```typescript
 import Ajv from "ajv"
 const ajv = new Ajv()
 ``` 

But to import it in JavaScript you now need to use the `default` property of the exported object:

 ```javascript
 const Ajv = require("ajv").default
 // or const {default: Ajv} = require("ajv")
 const ajv = new Ajv()
 ``` 

And if you use JavaScript modules you need to import Ajv this way:

 ```javascript
 // from .mjs file
 import Ajv from "ajv"
 const ajv = new Ajv.default()
 ``` 

This is a compromise approach that results in slightly simpler compiled JavaScript code for Ajv and, more importantly, allows additional things to be exported alongside Ajv without forcing dependencies to use TypeScript’s `esModuleInterop` setting. Possibly, there is a better way to export Ajv – please share any ideas in this issue.

Ajv installation

Several users, in particular those who use `yarn` rather than `npm`, had issues related to version conflicts between the old and new versions. Because Ajv is a dependency of many JavaScript tools, users can have both version 6 and version 7 installed at the same time.

When version 6 was released 2 years ago there were a lot of version conflicts. Since then npm seems to have improved – it handles multiple versions correctly when performing a clean installation – at least I have not seen any example of version conflicts in this scenario. But when performing incremental installations, version conflicts still happened for a few users.

This situation should resolve itself as dependencies migrate, and in all cases a clean installation resolved the problem.

Code generation performance

Validation code that Ajv v7 generates is at least as efficient as code generated by v6, and in many cases it is faster – version 7 introduced several tree optimisations and other improvements. The primary objective of re-designing code generation was to improve its security when using untrusted schemas and to make the code more maintainable.

As a side effect, it also led to the reduction of Ajv bundle size.

The downside that may be affecting some users though is that the code generation itself is 4-5 times slower.

For most users it won’t have an impact on the application performance, as schema compilation should only happen once, when the application is started, or when the schema is used for the first time. But there are several scenarios when it can be important:

1. When using schemas in short-lived environments where validation is performed only once or a few times per compilation – this may include serverless environments, short-lived web pages, etc. In this case you should explore the possibility of using standalone validation code to compile all your schemas at build time. Ajv v7 improved the stability of generating standalone validation code, and it is now supported for all schemas.

2. When a schema is generated dynamically for each validation (or to perform a small number of validations). There is no good solution here – Ajv (and any other validator that compiles schemas to code) is simply not a good fit for such scenarios if performance is critical. The main advantage of schema compilation is that the produced validation code is much faster than interpreting the schema. But if the schema is dynamic, then there is no benefit to the compilation – a validator that interprets the schema in the process of validation could be a better fit. While it may be 50-100 times slower to validate, it may still be faster than compiling the schema to code. You need to run your own benchmarks and decide what is better for your application.

3. When you have used Ajv incorrectly and compiled the schema for each validation. The correct usage is either to use the same Ajv instance to manage both schemas and compiled validation functions, or to manage (cache) them in your application code. In Ajv v7 this incorrect usage is more likely to be noticeable, both because of the slower compilation speed and because Ajv now caches the functions using the schema object itself as a key rather than its serialised representation.

To summarize, if you use Ajv correctly, as it is intended, it will be both safer and faster, but if you use(d) it incorrectly it may become slower.

Caching compiled schemas

Ajv compiles schemas to validation code that is very fast, but the compilation itself is costly, so it is important to reuse compiled validation functions.

There are 2 possible approaches:

1. Compile schemas either at start time or on demand, lazily, and manage how validation functions are re-used in your application code:

 ```javascript
 const schema = require("mu_schema.json")
 const validate = ajv.compile(schema)
 // ...
 // in this case schema compilation happens
 // when app is started, before any request is processed
 async function processRequest(req) {
   if (!validate(req.body)) throw Error("bad request")
   // ...
 }
 ``` 

It is important that `ajv.compile` is used outside of any API endpoints, as otherwise Ajv may recompile the schema every time it is used (depending on whether you pass the same schema reference or not).

2. Add all schemas to the Ajv instance, using it as a cache of compiled validation functions, and later retrieve them using either the `$id` attribute from the schema or the key passed to the `addSchema` method.

File `./my_schema.json`:

 ```json
 {
   "$id": "https://example.com/schemas/my_schema",
   "type": "object",
   "properties": {
     "foo": {
       "type": "string"
     }
   }
 }
 ``` 

Code:

 ```javascript
 const schema = require("./my_schema.json")
 ajv.addSchema(schema, "my_schema")
 // ...
 // schema compilation happens on demand
 // but only the first time the schema is used
 async function processRequest(req) {
   const validate = ajv.getSchema("https://example.com/schemas/my_schema")
   // or
   // const validate = ajv.getSchema("my_schema")
   if (!validate(req.body)) throw Error("bad request")
   // ...
 }
 ``` 

If you pass exactly the same (and not just deeply equal) schema object to Ajv, it will use a cached validation function anyway, using the schema object reference as a key.

But if you pass a new instance of the schema, even if its contents are deeply equal, Ajv will compile it again. In version 6, Ajv used the serialized schema as a cache key, which partially protected against incorrect reuse of compiled validation functions, but it had both performance and memory costs. Some users ran into this problem (https://github.com/ajv-validator/ajv/issues/1413) when migrating to version 7.

Validation of JSON Schema formats

Format validation has always been a difficult area, as it is not possible to find an optimal balance between validation performance, correctness and security – these objectives are contradictory, and, depending on your application, you would need a different approach to validate the same format.

The JSON Schema specification has evolved to the point of declaring format validation an optional, opt-in behaviour, and Ajv v7 made the same choice – formats are now released as a separate package, ajv-formats.

Unlike what the JSON Schema specification prescribes, Ajv does not just quietly ignore formats – that would be error-prone – you have to explicitly configure the desired behaviour (or not use formats in your schemas).

You have several options:

1. Fully disable format validation with the option `validateFormats: false`. In this case, even if you use formats in the schema, they will be ignored.

2. Define the list of formats that you want to be ignored by passing `true` values for those formats in the `formats` option:

 ```javascript
 new Ajv({formats: {email: true}})
 ``` 

The configuration above would allow and ignore the `email` format in your schemas, but would still throw an exception if any other format is used. This approach is more performant than passing a regular expression `/.*/` or a function `() => true`, because those would have to be executed, whereas with `true` no validation code is generated for the format at all.

3. Use the ajv-formats package – it includes all formats previously shipped as part of Ajv; some formats come in two variants – more performant and more correct (`fast` and `full` – see the ajv-formats docs). A sketch follows this list.

4. Define your own functions (or use a 3rd-party library) to validate the formats that suit your application – you can pass a function to Ajv for each format you use, and you can even use asynchronous validation if, for example, you want to validate the existence and/or configuration of a domain name as part of `email` or `hostname` validation.

The last approach to validate formats – defining your own functions or using a library – is strongly recommended as it allows you to achieve the right balance between validation security, speed and correctness that fits your application.
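
Here is a sketch combining approaches 3 and 4, assuming the ajv and ajv-formats packages are installed; the `product-code` format and its pattern are hypothetical, application-defined examples.

```typescript
import Ajv from "ajv";
import addFormats from "ajv-formats";

const ajv = new Ajv();
addFormats(ajv); // registers the formats previously bundled with Ajv ("email", "uri", "date-time", ...)

// An application-specific format defined with a regular expression
ajv.addFormat("product-code", /^[A-Z]{3}-\d{4}$/);

const validate = ajv.compile({
  type: "object",
  properties: {
    contact: {type: "string", format: "email"},
    sku: {type: "string", format: "product-code"},
  },
});

console.log(validate({contact: "a@example.com", sku: "ABC-1234"})); // true
```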

JSON Schema draft 4 should be used with version 6

Draft 4 of JSON Schema was the first version that Ajv supported, and since then there have been several important changes in the specification that made supporting multiple versions of JSON Schema in the same codebase unnecessarily complex.

JSON Schema draft 2019-09 has introduced further complexity, so the support for draft 4 was removed.

You can either continue using JSON Schema draft 4 with Ajv version 6, or, if you want all the advantages of Ajv version 7, you need to migrate your schemas – it is very simple with the ajv-cli command line utility.

What is next

Thanks to continuing sponsorship from Mozilla, many new improvements are coming to Ajv – the new major version 8 will be released in a few months.

The most exciting new feature, just released in version 7.1.0, is support for an alternative specification for JSON validation – JSON Type Definition – which was approved as RFC 8927 in November 2020. This is a much simpler and more restrictive standard than JSON Schema, and it enforces better data design for JSON APIs, prevents user mistakes, and maps well to the type systems of all major languages. See the “Choosing schema language” section in the Ajv readme for a detailed comparison.
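
A minimal sketch of the JTD support, assuming Ajv v7.1+, where the JTD-flavoured class is exported from `ajv/dist/jtd`; the schema and data are illustrative.

```typescript
import Ajv from "ajv/dist/jtd";

const ajv = new Ajv();
const validate = ajv.compile({
  properties: {
    name: {type: "string"},
    age: {type: "uint8"},
  },
});

console.log(validate({name: "Ada", age: 36}));   // true
console.log(validate({name: "Ada", age: "36"})); // false – age must be a uint8 number
```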

Ajv version 8 will bring many additional features and stability improvements and also will support the changes in the most recent JSON Schema draft 2020-12.

The second exciting change that is coming soon is a new website for Ajv – to make the documentation more accessible and discoverable, and to make contributions easier.

Thanks a lot for supporting Ajv!

Pointer Events Polyfill (PEP) enters emeritus status at the OpenJS Foundation

By Announcement, Blog, Project Update

The Pointer Events Polyfill (PEP), originally part of the jQuery project family, is being fully deprecated after 8 years. The current project maintainer, Patrick H. Lauke (who also chairs the W3C Pointer Events Working Group), worked with contributors to push the final stable and secure release to npm in December 2020.

The OpenJS Foundation is honored to have been the neutral home for PEP and is grateful for those who have kept the project up and running over the years. 

PEP History and Milestones

PEP is an early example of open source experimentation and developer adoption driving web standards development.

Originally part of Google’s Polymer Project, PEP gave developers an early opportunity to experiment with the ideas introduced by Microsoft’s W3C member submission for a Pointer Events specification – providing websites and applications with a more cohesive way to handle DOM events from a variety of input devices – such as touch, mouse, and stylus – rather than having to use separate event models (mouse events and touch events) in parallel.

PEP joined the jQuery Foundation on December 17, 2014, to ensure that the polyfill was maintained in a sustainable and browser-agnostic way, and that tool developers could use it as a path to implementation in all browsers.

Active development of PEP continued through the initial standardisation process, which also saw jQuery members directly involved in the W3C working group, and that led to the stable Pointer Events 1 specification in 2015. PEP played an important role in the Pointer Events standardisation process, allowing an early test-bed for both spec implementers and developers in the wider web community to familiarise themselves with the new standard.

PEP eventually came to the OpenJS Foundation by way of the JS Foundation, the successor of the jQuery Foundation.

The Pointer Events specification has since grown and evolved – with Pointer Events Level 2 reaching Recommendation status in April 2019, and development currently under way on Pointer Events Level 3. Many of the functionalities introduced in these newer versions were, unfortunately, too fundamental to be easily “patchable” with a polyfill, which gradually slowed development on PEP, focusing it mostly on security patches and bug fixes.

However, while PEP may now be deprecated, the future of Pointer Events themselves is looking good, with the native API now supported in the majority of current browsers (see caniuse.com/pointer). For this reason, unless a project specifically targets older browser versions, we would strongly encourage developers to stop including PEP and to instead rely solely on native Pointer Events.
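
For teams making that switch, the native API needs no library at all – a minimal sketch, runnable in any browser with Pointer Events support:

```typescript
// A single listener covers mouse, touch, and stylus input.
document.addEventListener('pointerdown', (event: PointerEvent) => {
  // pointerType identifies the input device: "mouse", "touch", or "pen"
  console.log(`pointerdown via ${event.pointerType} at (${event.clientX}, ${event.clientY})`);
});
```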

Thank yous

Open source projects don’t run, or archive, themselves. There are people behind the GitHub repos that ensure things run smoothly. We’d like to thank all the contributors of the project (including Daniel Freedman from the Polymer Project, Scott González who represented the jQuery Foundation on the W3C working group and led the bulk of the development during that time, and Patrick H. Lauke who coordinated the final release) for maintaining PEP over the years and for giving back to the open source community.