Project News: NativeScript v8.0

New version signals growth and evolution

This week, NativeScript, an incubation project at the OpenJS Foundation, shipped version 8.0. NativeScript is an open source, community-driven framework that gives JavaScript developers direct access to native platform APIs. The release includes major upgrades, among them a streamlined, JavaScript-focused development stack and more efficient iOS and Android development – especially timely given how important feature parity between the platforms has become. Additionally, v8.0 aims to make cross-platform development effective, practical, and fun. Read the project’s blog here.

“NativeScript brings together the convenience of web development with the capabilities and performance of the native mobile world,” said NativeScript Technical Steering Committee member, Stanimira Vlaeva.

What’s new?

Users can expect the following updates in the latest release: 

* Official Apple M1 support

* webpack5 support

* First class a11y support

* CSS box-shadow support (requested since 2015!)

* CSS text-shadow support

* New `hidden` binding property for more performance-demanding cases

* New official eslint rules for NativeScript projects

* New `RootLayout` container – offering more dynamic creative view development

* New @nativescript/debug-ios package for deep view level investigations on your simulator or device

* New @nativescript/apple-pay plugin

* New @nativescript/google-pay plugin

* New website and revamped docs to better represent the present and future of NativeScript

* The first official NativeScript Best Practices Guide

* More streamlining of core to further prepare for continual evolutionary enhancements

NativeScript 8.0 brings some valuable benefits, including:

  • Reduced costs for multi-platform delivery and easier long-term maintenance of TypeScript-based tech stacks
  • The ability for managers to tap into the large JavaScript developer workforce
  • The ability to integrate with any popular frontend framework that teams would like to use

This new release signals solid footing for growth and natural, modern JavaScript evolution by addressing some of the oldest requested features. These include adding structural integrity with the official eslint package, enabling more creative view development via the new RootLayout, affirming broad use-case applicability via the new Capacitor integration, supporting the latest webpack5, and revamping the website and documentation, to name a few.

Get Involved!

Come join the fun! Clone https://github.com/NativeScript/NativeScript and experiment with the source however you like. Get involved in public discussions around NativeScript via the RFCs board: https://github.com/NativeScript/rfcs/discussions. Join the Discord channel to stay in touch: https://discord.gg/RgmpGky9GR

Project Update: nvm ships new version

Today nvm released v0.38.0! This latest release includes new `nvm install` features, bug fixes, and updates to documentation.

Major updates include: 

  • Improvements to nvm install: OpenBSD source builds are now parallelized; nvm install -b will skip compiling from source
  • Bug fixes:
    • nvm exec: ensure `--` stops argument parsing
    • fix variable issues on some shells; avoid conflicts with oh-my-zsh global variables
    • fix npm exec on older versions of npm 7
    • fix lts/-1 aliases being off-by-one
  • Lots of documentation improvements
  • Cloning the repo on Windows should no longer fail due to test filenames

Check out the release notes: https://github.com/nvm-sh/nvm/releases/tag/v0.38.0

Project Update: jQuery 3.6.0 Released!

Congrats to the jQuery team on their most recent release, version 3.6.0! jQuery is an Impact Project at the OpenJS Foundation.

The new release includes bug fixes and other improvements.

Thank you to all of you who participated in this release by submitting patches, reporting bugs, or testing, including Dallas Fraser, Michal Golebiowski-Owczarek, Wonseop Kim, Wonhyoung Park, Beatriz Rezener, Natalia Sroka, and the whole team.

To read more about the new version and to download, visit the project’s blog.

Project News: Electron ships v12

Electron, an impact project at the OpenJS Foundation, recently released an updated version, Electron 12.0.0. This new version includes upgrades to Chromium 89, V8 8.9 and Node.js 14.16. The team also added changes to the remote module, new defaults for contextIsolation, a new webFrameMain API, and general improvements. Full details of the new release can be found on the Electron blog.
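
For context, a minimal sketch of what the new contextIsolation default means for app code – the window setup below is illustrative, not taken from the Electron release notes:

```javascript
const {app, BrowserWindow} = require('electron')

app.whenReady().then(() => {
  const win = new BrowserWindow({
    webPreferences: {
      // As of Electron 12, contextIsolation defaults to true, so renderer
      // code runs in a separate JavaScript context from preload scripts.
      // Setting it explicitly here just documents the new behavior.
      contextIsolation: true
    }
  })
  win.loadURL('https://example.com')
})
```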

Congrats to the Electron team!

Ajv Version 7: Big changes and improvements

The following post was written by Evgeny Poberezkin, lead maintainer of Ajv (incubation project of the OpenJS Foundation). 

It’s been over a month since Ajv version 7 was released, and in this time many users have migrated to the new version. Ajv v7 is a complete rewrite that changed both the language – to TypeScript – and the library design. I’m happy to share that it has been relatively smooth, without any major issues.

What’s new

I’ve written previously about what has changed in version 7; to summarize:

1. Support for JSON Schema draft 2019-09 – users have been asking specifically for the `unevaluatedProperties` keyword, which adds flexibility to validation scenarios, even if at a performance cost (see the sketch after this list).

2. More secure code generation in case untrusted schemas are used. Execution of code that might be embedded in untrusted schemas is now prevented by design, on a compiler type system level (and you don’t need to use TypeScript to benefit from it, unless you are defining your own keywords).

3. Standalone validation code generation is now comprehensively supported for all schemas.

4. Strict mode protecting users from common mistakes when writing JSON Schemas.
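
As an illustration of the first point, a minimal sketch of `unevaluatedProperties` – the schema and data are illustrative, and draft 2019-09 support is assumed to come from Ajv v7’s separate `ajv/dist/2019` entry point:

 ```javascript
 const Ajv2019 = require("ajv/dist/2019").default
 const ajv = new Ajv2019()

 // `unevaluatedProperties: false` rejects properties that none of the
 // subschemas in `allOf` evaluated – something `additionalProperties`
 // alone cannot express across combined schemas.
 const validate = ajv.compile({
   allOf: [
     {properties: {foo: {type: "string"}}},
     {properties: {bar: {type: "number"}}}
   ],
   unevaluatedProperties: false
 })

 console.log(validate({foo: "a", bar: 1})) // true
 console.log(validate({foo: "a", baz: true})) // false – `baz` was never evaluated
 ```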

That is a big list of improvements that was possible thanks to Mozilla’s MOSS program grant.

Better for community

I’m also excited to share that Ajv v7 has attracted growing contribution interest from its users, in some cases with independent users collaborating with each other on new features.

There are several reasons for that, I believe:

– the code is better organised and written at a higher level – it is easier to read and to change than before.

– documentation is now better structured with additional sections specifically for contributors – code components and code generation.

I am really looking forward to all the new ideas and features coming from Ajv users.

What’s changed and removed

These improvements came at the cost of a full library redesign, which requires being aware of the following changes during migration:

– Importing from your code

– Installation

– Code generation performance

– Validation of JSON Schema formats

– Migrating from JSON Schema draft 4

These changes – which caused migration difficulties for some users – are covered in detail below.

Importing Ajv from your code

To import Ajv in TypeScript you can still use the default import:

 ```typescript
 import Ajv from "ajv"
 const ajv = new Ajv()
 ``` 

But to import it in JavaScript you now need to use the `default` property of the exported object:

 ```javascript
 const Ajv = require("ajv").default
 // or const {default: Ajv} = require("ajv")
 const ajv = new Ajv()
 ``` 

And if you use JavaScript ES modules you need to import Ajv this way:

 ```javascript
 // from .mjs file
 import Ajv from "ajv"
 const ajv = new Ajv.default()
 ``` 

This is a compromise approach that leads to slightly simpler compiled JavaScript and a smaller code size for Ajv and, more importantly, allows exporting additional things alongside Ajv without forcing dependent packages to use the `esModuleInterop` setting of TypeScript. Possibly there is a better way to export Ajv – please share any ideas in this issue.

Ajv installation

Several users, in particular those who use `yarn` rather than `npm`, had issues related to version conflicts between the old and new versions. Because Ajv is a dependency of many JavaScript tools, users can end up with both version 6 and version 7 installed at the same time.

When version 6 was released two years ago, there were a lot of version conflicts. Since then npm seems to have improved – it handles multiple versions correctly when performing a clean installation; at least I have not seen any example of version conflicts in this scenario. But when performing incremental installations, version conflicts still happened to a few users.

This situation should resolve itself as dependencies migrate, and in all cases a clean installation resolved the problem.

Code generation performance

Validation code generated by Ajv v7 is at least as efficient as code generated by v6, and in many cases it is faster – version 7 introduced several tree optimisations and other improvements. The primary objective of redesigning code generation was to improve its security when using untrusted schemas and to make the code more maintainable.

As a side effect, it also led to the reduction of Ajv bundle size.

The downside that may affect some users, though, is that code generation itself is 4–5 times slower.

For most users this won’t have an impact on application performance, as schema compilation should happen only once – when the application starts, or when the schema is used for the first time. But there are several scenarios where it can be important:

1. When using schemas in short-lived environments where validation is performed only once or a few times per compilation – this may include serverless environments, short-lived web pages, etc. In this case you should explore compiling all your schemas to standalone validation code at build time. Ajv v7 improved the stability of standalone code generation, and it is now supported for all schemas (see the sketch after this list).

2. When a schema is generated dynamically for each validation (or for a small number of validations). There is no solution for such scenarios – Ajv (and any other validator that compiles schemas to code) is simply not a good fit if performance is critical. The main advantage of schema compilation is that the produced validation code is much faster than interpreting the schema would be. But if the schema is dynamic, compilation brings no benefit – a validator that interprets the schema in the process of validation could be a better fit. While it may be 50–100 times slower to validate, it may still be faster than compiling the schema to code. You need to run your own benchmarks and decide what is better for your application.

3. When you used Ajv incorrectly and compiled the schema for each validation. The correct usage is either to use the same Ajv instance to manage both schemas and compiled validation functions, or to manage (cache) them in your application code. In Ajv v7 this incorrect usage is more likely to be noticeable, both because of the slower compilation speed and because Ajv caches compiled functions using the schema object itself as a key, not its serialised representation.
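
Returning to the first scenario, here is a minimal sketch of generating standalone validation code at build time – the schema and output file name are illustrative, and the `ajv/dist/standalone` module with the `code: {source: true}` option is assumed per the v7 docs:

 ```javascript
 const fs = require("fs")
 const Ajv = require("ajv").default
 const standaloneCode = require("ajv/dist/standalone").default

 // `code: {source: true}` makes Ajv retain the generated source so it
 // can be serialized into a self-contained module.
 const ajv = new Ajv({code: {source: true}})
 const validate = ajv.compile({type: "object", properties: {foo: {type: "string"}}})

 // Write a CommonJS module exporting the validation function – at runtime
 // it can be required directly, with no schema compilation step.
 fs.writeFileSync("./validate_foo.js", standaloneCode(ajv, validate))
 ```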

To summarize: if you use Ajv correctly, as it is intended, it will be both safer and faster, but if you use(d) it incorrectly it may become slower.
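
For contrast, a minimal sketch of this incorrect usage – the handler and schema are illustrative:

 ```javascript
 // Anti-pattern: a fresh schema object is created and compiled on every
 // request, so Ajv's cache – keyed by object reference in v7 – never
 // gets a hit, and compilation cost is paid on each call.
 async function processRequest(req) {
   const validate = ajv.compile({
     type: "object",
     properties: {foo: {type: "string"}}
   })
   if (!validate(req.body)) throw Error("bad request")
   // ...
 }
 ```

The next section shows the two correct ways to reuse compiled validation functions.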

Caching compiled schemas

Ajv compiles schemas to validation code that is very fast, but the compilation itself is costly, so it is important to reuse compiled validation functions.

There are two possible approaches:

1. Compile schemas either at start time or on demand, lazily, and manage how validation functions are re-used in your application code:

 ```javascript
 const schema = require("mu_schema.json")
 const validate = ajv.compile(schema)
 // ...
 // in this case schema compilation happens
 // when app is started, before any request is processed
 async function processRequest(req) {
   if (!validate(req.body)) throw Error("bad request")
   // ...
 }
 ``` 

It is important that `ajv.compile` is called outside of any API endpoints, as otherwise ajv may recompile the schema every time it is used (depending on whether you pass the same schema reference or not).

2. Add all schemas to the Ajv instance, using it as a cache of compiled validation functions, and later retrieve them using either the `$id` attribute from the schema or the key passed to the `addSchema` method.

File `./my_schema.json`:

 ```json
 {
   "$id": "https://example.com/schemas/my_schema",
   "type": "object",
   "properties": {
     "foo": {
       "type": "string"
     }
   }
 }
 ``` 

Code:

 ```javascript
 const schema = require("./my_schema.json")
 ajv.addSchema(schema, "my_schema")
 // ...
 // schema compilation happens on demand
 // but only the first time the schema is used
 async function processRequest(req) {
   const validate = ajv.getSchema("https://example.com/schemas/my_schema")
   // or
   // const validate = ajv.getSchema("my_schema")
   if (!validate(req.body)) throw Error("bad request")
   // ...
 }
 ``` 

If you pass exactly the same (and not just deeply equal) schema object to ajv, it will use a cached validation function anyway, using the schema object reference as a key.

But if you pass a new instance of the schema, even if its contents are deeply equal, ajv will compile it again. In version 6, Ajv used the serialized schema as a cache key, which partially protected against incorrect usage of compiled validation functions, but it had both performance and memory costs. Some users hit this problem (https://github.com/ajv-validator/ajv/issues/1413) when migrating to version 7.
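
A small illustration of this caching behaviour, assuming an `ajv` instance as above:

 ```javascript
 const schema = {type: "string"}
 const v1 = ajv.compile(schema)
 const v2 = ajv.compile(schema) // same object reference – the cached function is reused
 console.log(v1 === v2) // true

 const v3 = ajv.compile({type: "string"}) // deeply equal, but a new object
 console.log(v1 === v3) // false – compiled again in v7
 ```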

Validation of JSON Schema formats

Format validation has always been a difficult area, as it is not possible to find an optimal balance between validation performance, correctness, and security – these objectives are contradictory, and, depending on your application, you may need a different approach to validate the same format.

The JSON Schema specification has evolved to the point of declaring format validation an optional, opt-in behaviour, and Ajv v7 made the same choice – formats are now released as a separate package, ajv-formats.

Unlike what the JSON Schema specification prescribes, Ajv does not just quietly ignore formats – that would have been error-prone – you have to explicitly configure the desired behaviour (or not use formats in your schemas).

You have several options:

1. Fully disable format validation with the option `validateFormats: false`. In this case, even if you use formats in the schema, they will be ignored.

2. Define the list of formats that you want to be ignored by passing `true` for those formats in the `formats` option:

 ```javascript
 new Ajv({formats: {email: true}})
 ``` 

The configuration above would allow and ignore the `email` format in your schemas, but would still throw an exception if any other format is used. This approach is more performant than passing a regular expression `/.*/` or a function `() => true`, because those would have to be executed, whereas with `true` no validation code is generated for the format at all.

3. Use the ajv-formats package – it includes all formats previously shipped as part of Ajv; some of the formats come in two variants, a more performant and a more correct one (`fast` and `full` – see the ajv-formats docs). A sketch of this option follows the list.

4. Define your own functions (or use a 3rd-party library) to validate the formats that suit your application – you can pass a function to ajv for each format you use, and you can even use asynchronous validation if, for example, you want to validate the existence and/or configuration of a domain name as part of `email` or `hostname` validation.

The last approach to validating formats – defining your own functions or using a library – is strongly recommended, as it allows you to achieve the right balance between validation security, speed, and correctness for your application.
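
A combined sketch of options 3 and 4 – registering the ajv-formats package and defining a custom format function. The custom format name and regular expression are purely illustrative:

 ```javascript
 const Ajv = require("ajv").default
 const addFormats = require("ajv-formats")

 const ajv = new Ajv()
 addFormats(ajv) // option 3: registers the formats shipped in ajv-formats

 // Option 4: a custom format – Ajv calls the function with the string value
 // and treats a falsy return as a validation failure. This check is
 // deliberately simplistic, for illustration only.
 ajv.addFormat("corporate-email", (value) => /^[^@\s]+@example\.com$/.test(value))
 ```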

JSON Schema draft 4 should be used with version 6

Draft 4 of JSON Schema was the first version that Ajv supported, and since then there have been several important changes in the specification that made supporting multiple versions of JSON Schema in the same code unnecessarily complex.

JSON Schema draft 2019-09 introduced further complexity, so support for draft 4 was removed.

You can either continue using JSON Schema draft 4 with Ajv version 6, or, if you want all the advantages of Ajv version 7, migrate your schemas – it is very simple with the ajv-cli command-line utility.

What is next

Thanks to continuing sponsorship from Mozilla, many new improvements are coming to Ajv – the new major version 8 will be released in a few months.

The most exciting new feature, just released in version 7.1.0, is support for an alternative specification for JSON validation – JSON Type Definition (JTD), approved as RFC 8927 in November 2020. This is a much simpler and more restrictive standard than JSON Schema: it enforces better data design for JSON APIs, prevents user mistakes, and maps well to the type systems of all major languages. See the “Choosing schema language” section in the Ajv readme for a detailed comparison.
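
A minimal sketch of validating with a JTD schema – the schema is illustrative, and JTD support is assumed to come from the separate `ajv/dist/jtd` entry point that shipped with v7.1.0:

 ```javascript
 const Ajv = require("ajv/dist/jtd").default
 const ajv = new Ajv()

 // JTD schemas are deliberately restrictive: every property has an
 // explicit, language-friendly type.
 const validate = ajv.compile({
   properties: {
     name: {type: "string"},
     age: {type: "uint8"}
   }
 })

 console.log(validate({name: "Ada", age: 36})) // true
 console.log(validate({name: "Ada", age: -1})) // false – uint8 must be 0–255
 ```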

Ajv version 8 will bring many additional features and stability improvements and also will support the changes in the most recent JSON Schema draft 2020-12.

The second exciting change that is coming soon is a new website for Ajv – to make the documentation more accessible and discoverable, and to make contributions easier.

Thanks a lot for supporting Ajv!

Pointer Events Polyfill (PEP) enters emeritus status at the OpenJS Foundation

The Pointer Events Polyfill (PEP), originally part of the jQuery project family, has been fully deprecated after 8 years. Current project maintainer Patrick H. Lauke (who also chairs the W3C Pointer Events Working Group) worked with contributors to push the final stable and secure release to npm in December 2020.

The OpenJS Foundation is honored to have been the neutral home for PEP and is grateful for those who have kept the project up and running over the years. 

PEP History and Milestones

PEP is an early example of open source experimentation and developer adoption driving web standards development.

Originally part of Google’s Polymer Project, PEP gave developers an early opportunity to experiment with the ideas introduced by Microsoft’s W3C member submission for a Pointer Events specification – providing websites and applications with a more cohesive way to handle DOM events from a variety of input devices – such as touch, mouse, and stylus – rather than having to handle separate event models (mouse events and touch events) in parallel.

PEP joined the jQuery Foundation on December 17, 2014 in order to ensure that the polyfill was maintained in a sustainable and browser-agnostic way, and that tool developers could use it as a path to implementation in all browsers.

Active development of PEP continued through the initial standardisation process, which also saw jQuery members directly involved in the W3C working group, and that led to the stable Pointer Events 1 specification in 2015. PEP played an important role in the Pointer Events standardisation process, allowing an early test-bed for both spec implementers and developers in the wider web community to familiarise themselves with the new standard.

PEP eventually came to the OpenJS Foundation by way of the JS Foundation, the successor of the jQuery Foundation.

The Pointer Events specification has since grown and evolved – with Pointer Events Level 2 reaching recommendation status in April 2019, and current development on Pointer Events Level 3. Many of the functionalities introduced in these newer versions were, unfortunately, too fundamental to be easily “patchable” with a polyfill, which gradually slowed development on PEP – focusing mostly on security patches and bug fixes.

However, while PEP may now be deprecated, the future of Pointer Events themselves is looking good, with the native API now supported in the majority of current browsers (see caniuse.com/pointer). For this reason, unless a project specifically targets older browser versions, we would strongly encourage developers to stop including PEP and to instead rely solely on native Pointer Events.

Thank yous

Open source projects don’t run, or archive, themselves. There are people behind the GitHub repos that ensure things run smoothly. We’d like to thank all the contributors of the project (including Daniel Freedman from the Polymer Project, Scott González who represented the jQuery Foundation on the W3C working group and led the bulk of the development during that time, and Patrick H. Lauke who coordinated the final release) for maintaining PEP over the years and for giving back to the open source community. 

Project News: WebdriverIO v7 Release

Today, the WebdriverIO team has released v7! WebdriverIO is a hosted project at the OpenJS Foundation. To further grow the project, this new release brings with it an almost complete rewrite of the code base. With the v5 update, the project moved from a multi-repository setup to a monorepo. This time, the rewrite of the code base is just as important and impactful, but comes with almost no implications for the end user.

This major update will have the biggest impact on TypeScript users, as types in all places have been updated and the way they are distributed has also been changed. As part of the rewrite, WebdriverIO has upgraded to Cucumber v7, which also moved its codebase to TypeScript.

Hear from Christian Bromann, software engineer and core contributor to the WebdriverIO project, as he explains some key updates:

Several updates include:

TypeScript Rewrite

The team has rewritten the complete code base, touching almost all files, to add type safety and fix a lot of bugs along the way. This was a true community effort and would have taken much longer without so many people helping with code contributions. Big thanks to the community for that ❤️! Before, WebdriverIO auto-generated all type definitions, which caused a lot of duplicate types and inconsistency. With this overhaul, all types are taken directly from the code itself and centralised in a single new helper package called @wdio/types. If you have been using TypeScript, you will now have much better type support for various commands and the configuration file.

Improved Google Lighthouse Integration

Since v6, WebdriverIO can run on the WebDriver protocol for true cross-browser automation, but it can also automate specific browsers using browser APIs such as Chrome DevTools. This allows for interesting integrations with tools that enable broader testing capabilities, such as Google Lighthouse. With `@wdio/devtools-service`, WebdriverIO users can access these capabilities, using Google Lighthouse to run performance tests. This release also updates Google Lighthouse to the latest version, enabling new performance metrics such as Cumulative Layout Shift and First Input Delay.
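
A hedged sketch of what such a performance test can look like with the devtools service enabled (via `services: ['devtools']` in wdio.conf.js) – the URL is illustrative:

```javascript
describe('performance audit', () => {
  it('captures Lighthouse metrics for a page', async () => {
    // Provided by @wdio/devtools-service: run Lighthouse audits on navigation.
    await browser.enablePerformanceAudits()
    await browser.url('https://example.com')
    const metrics = await browser.getMetrics()
    console.log(metrics) // includes the performance metrics mentioned above
  })
})
```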

New PWA Check Command

In addition, WebdriverIO has deepened its integration with the tool and added audits for capturing the quality of your progressive web apps (PWAs). These applications are built with modern web APIs to deliver enhanced capabilities, reliability, and installability, while reaching anyone, anywhere, on any device with a single codebase.
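
A sketch of the new PWA check command in a test – the URL is illustrative, and the devtools service is assumed to be enabled as above:

```javascript
describe('PWA audit', () => {
  it('passes the progressive web app checks', async () => {
    await browser.url('https://webdriver.io')
    const result = await browser.checkPWA()
    expect(result.passed).toBe(true)
  })
})
```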

WebdriverIO will continue to add more integrations with tools like Google Lighthouse to provide more testing capabilities, e.g. accessibility, best practices, and SEO.

New Docs

As you might already have seen, the team has updated the docs to give this new release a brand-new face. They have upgraded their Docusaurus setup to v2 and given the whole design a new touch. Big shout-out to Anton Meier for helping out and making the robot on the front page so lively.

Drop Node v10 Support

WebdriverIO has dropped support for Node v10, which was moved into a maintenance LTS phase by the Node.js team in May 2020. While this version still receives important security updates until April 2021, it is recommended to update your Node.js version to v14 or higher.

Read the full release post: https://webdriver.io/blog/2021/02/09/webdriverio-v7-released/

If you have any questions don’t hesitate to start a conversation on the discussions page or join our growing support chat that has already reached 6.7k active members.

Node.js Mentorship Initiative: N-API Opportunity

This blog was written by A.J. Roberts and the Node.js Mentorship Initiative team. This post was first published on the Node.js blog. Node.js is an impact project at the OpenJS Foundation.

The Node.js Mentorship Initiative is happy to announce our next opportunity. This one is open to developers with experience in C++. You will work hand-in-hand with the N-API working group members with the eventual goal of becoming a full-fledged member of the working group.

If you’re not familiar with the working group, we recommend checking out their recent blog post.

The N-API working group has the goal of making it easier to develop native addons for Node.js and other runtimes like Electron. They have already accomplished a lot of crucially important work for the Native Addons ecosystem. You can help them accomplish even more by improving test coverage, adding features to N-API, and creating examples for native plugin authors to follow.

The N-API working group will set aside time specifically for helping and guiding you, so it’s definitely worth applying through the Mentorship Initiative if you feel like this would benefit you. In order to do that, you should complete the application and its included challenge. The challenge is expected to take 2–4 hours to complete.

Please apply here by January 29th, 2021. We look forward to seeing your submissions.

Electron Update: Community Discord Server and Hacktoberfest

This blog was originally posted on the Electron blog. Electron is an Impact Project at the OpenJS Foundation.

Community Discord Server and Hacktoberfest

Join us for community bonding and a month-long celebration of open-source.

Electron Community Discord Launch

Electron’s Outreach Working Group is excited to announce the launch of our official community Discord server!

Why a new Discord server?

In its early days as the backbone of the Atom text editor, community discussion on the Electron framework occurred in a single channel in Atom’s Slack workspace. As time passed and the two projects were increasingly decoupled, the relevance of the Atom workspace to the Electron project decreased, and maintainer participation in the Slack channel declined in the same manner.

Up until now, we had still been redirecting our broader community to the Atom Slack workspace, even though we’ve had many reports from folks who have had trouble receiving invitations, and few of our core maintainers were frequenting the channel.

We’re setting up this shiny new server to be a central discussion hub for the community where you can get the latest news on all things Electron.

Get in here!

So far, the server’s membership consists of a few maintainers who have been working together to set it up, but we’re so excited to chat with you all! Come ask for help, keep up to date with Electron releases, or just hang out with other developers. We’ve got a handy invite for you that’ll give you access to the server!

Hacktoberfest 2020

As a large and long-running open-source project, Electron wouldn’t have been nearly as successful without all the contributions from its community, from code submissions to bug reports to documentation changes, and much more. That’s why we believe in the importance of participating in Hacktoberfest to usher in a wider community of developers of all skill levels into the project.

Odds and ends

This year, we don’t have a wider project to give you all to work on, but we’d like to focus on opportunities to contribute across the Electron JavaScript ecosystem.

Look out for issues tagged hacktoberfest across our various repositories, including the main electron/electron repository, the electron/electronjs.org website, electron/fiddle, and electron-userland/electron-forge!

P.S. If you’re feeling particularly adventurous, we also have a backlog of issues marked with help wanted tags if you’re looking for more of a challenge.

Stuck? Come chat with us!

Moreover, it’s also no coincidence that the grand opening of our Discord server coincides with the largest celebration of open-source software of the year. Check out the #hacktoberfest channel to ask for help on your Hacktoberfest PR. In case you missed it, here’s the invite link again!

Have feedback on this post? Let @electronjs know on Twitter.

Need help or found a bug? Contact us.

From streaming to studio: The evolution of Node.js at Netflix

As platforms grow, so do their needs. However, the core infrastructure is often not designed to handle these new challenges as it was optimized for a relatively simple task. Netflix, a member of the OpenJS Foundation, had to overcome this challenge as it evolved from a massive web streaming service to a content production platform. Guilherme Hermeto, Senior Platform Engineer at Netflix, spearheaded efforts to restructure the Netflix Node.js infrastructure to handle new functions while preserving the stability of the application. In his talk below, he walks through his work and provides resources and tips for developers encountering similar problems.

Check out the full presentation 

Netflix initially used Node.js to enable high volume web streaming to over 182 million subscribers. Their three goals with this early infrastructure were to provide observability (metrics), debuggability (diagnostic tools), and availability (service registration). The result was the NodeQuark infrastructure. An application gateway authenticates and routes requests to the NodeQuark service, which then communicates with APIs and formats responses that are sent back to the client. With NodeQuark, Netflix also created a managed experience — teams could create custom API experiences for specific devices. This allows the Netflix app to run seamlessly on different devices.

Beyond streaming

However, Netflix wanted to move beyond web streaming and into content production. This posed several challenges to the NodeQuark infrastructure and the development team. Web streaming requires relatively few applications, but serves a huge user base. On the other hand, a content production platform houses a large number of applications that serve a limited user base. Furthermore, a content production app must have multiple levels of security for employees, partners, and users. An additional issue is that development for content production is ideally fast-paced, while platform releases are slow, iterative processes intended to ensure application stability. Grouping these two processes together seems difficult, but the alternative is to spend unnecessary time and effort building a completely separate infrastructure.

Hermeto decided that in order to solve Netflix’s problems, he would need to use self-contained modules. In other words, plugins! By transitioning to plugins, the Netflix team was able to separate the infrastructure’s functions while still retaining the ability to reuse code shared between web streaming and content production. Hermeto then took plugin architecture to the next step by creating application profiles. The application profile is simply a list of plugins required by an application. The profile reads in these specific plugins and then exports a loaded array. Therefore, the risk of a plugin built for content production breaking the streaming application was reduced. Additionally, by sectioning code out into smaller pieces, the Netflix team was able to remove moving parts from the core system, improving stability. 
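
A minimal sketch of the application-profile idea described above – the plugin names and shapes are hypothetical, not Netflix's actual NodeQuark code:

```javascript
// Each plugin is a self-contained module exposing an init function.
const metrics = {name: 'metrics', init: (app) => ({plugin: 'metrics', app})}
const logging = {name: 'logging', init: (app) => ({plugin: 'logging', app})}
const auth = {name: 'auth', init: (app) => ({plugin: 'auth', app})}

// An application profile is simply the list of plugins an app requires.
const streamingProfile = [metrics, logging]
const studioProfile = [metrics, logging, auth]

// Loading a profile reads in its plugins and exports the loaded array, so a
// plugin built only for content production never touches a streaming app.
function loadProfile(profile, app = {}) {
  return profile.map((plugin) => plugin.init(app))
}

module.exports = {loadProfile, streamingProfile, studioProfile}
```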

Looking ahead

In the future, Hermeto wants to allow teams to create specific application profiles that they can give to customers. Additionally, Netflix may be able to switch from application versions to application profiles as the code breaks into smaller and smaller pieces. 

To finish his talk, Hermeto gave his personal recommendations for open source projects that are useful for observability and debuggability. Essentially, a starting point for building out your own production-level application!  

Personal recommendations for open source projects:

• Metrics and alerts
• Centralized logging
• Distributed tracing
• Diagnostics
• Exception management