A recent technology trend is “Low-code/No-code tools,” which help developers with great ideas and eagerness create their own applications. Node-RED is a great example of a Low-code/No-code tool, well suited to industrial IoT, the Web of Things, smart-city projects, education, and prototyping.
This conference features a great lineup of speakers talking about their use cases and technologies built with Node-RED. Talks will be in Japanese or English. The conference has been organized by the Node-RED User Group Japan with input from community members around the globe, including the United Kingdom, Brazil, Indonesia, and Japan.
– “A mechanism to grow OSS Eco-System – Linux Foundation – OpenJS Foundation – Node-RED”, Noriaki Fukuyasu, Linux Foundation Japan
– “Looking to the future of Node-RED”, Nick O’Leary, IBM
– “DevOps with Node-RED. How to quickly turn an idea into a service.”, Masanori Usami, Uhuru
– “Node-RED suitable for education in the no-code era”, Wataru Yamazaki, Uhuru
– “Utilizing IoT interoperability and Node-RED based on the Web of Things standard”, Kunihiko Toumura, Hitachi, Ltd.
– And 15 other great sessions!
More than 200 attendees have registered for the conference. Everyone can join the free event virtually via the web. We look forward to seeing you there, virtually!
The OpenJS In Action series features companies that use OpenJS Foundation projects to develop efficient, effective web technologies.
Esri uses OpenJS Foundation projects such as Dojo Toolkit, Grunt, ESLint and Intern to increase developer productivity and deliver high-quality applications that help the world fight back against the pandemic.
Esri’s contributions to the COVID response effort, along with an explanation of how they created the underlying technologies, are available in this video:
Robin Ginn, Executive Director of the OpenJS Foundation, spoke with Kristian Ekenes, Product Engineer at Esri, to highlight the work his company has been doing. Esri normally creates mapping software, databases and tools to help businesses manage spatial data. However, Ekenes started work on a tool called Capacity Analysis when the COVID-19 pandemic began to spread.
Capacity Analysis is a configurable app that allows organizations to display and interact with results from two scenarios predicting a hospital’s ability to meet the demand of COVID-19 patients given configurable parameters, such as the percentage of people following social distancing guidelines. Health experts can create two hypothetical scenarios using one of two models: Penn Medicine’s COVID-19 Hospital Impact Model for Epidemics (CHIME) or the CDC’s COVID-19Surge model. Then they can deploy their own version of Capacity Analysis to view how demand for hospital beds, ICU beds, and ventilators varies by time and geography in each scenario. This tool is used by governments worldwide to better predict how the pandemic will challenge specific areas.
During the interview, Ekenes spoke on the challenges that come with taking on ambitious projects like Capacity Analysis. Esri has both a large developer team and a diverse ecosystem of applications. This makes it difficult to maintain consistency in the API and SDKs deployed across desktop and mobile platforms. To overcome these challenges, Esri utilizes several OpenJS Foundation projects including Dojo Toolkit, Grunt, ESLint and Intern.
Ekenes explained that Grunt and ESLint increase developer productivity by providing real-time feedback when writing code. The linter also standardizes work across developers by indicating when incorrect practices are being used. This reduces the number of pull requests between collaborators and saves time for the entire team. Intern allows developers to write testing modules and create high-quality apps by catching bugs early. In short, Esri helps ensure consistent and thoroughly tested applications by incorporating OpenJS Foundation projects into their work.
As platforms grow, so do their needs. However, the core infrastructure is often not designed to handle these new challenges as it was optimized for a relatively simple task. Netflix, a member of the OpenJS Foundation, had to overcome this challenge as it evolved from a massive web streaming service to a content production platform. Guilherme Hermeto, Senior Platform Engineer at Netflix, spearheaded efforts to restructure the Netflix Node.js infrastructure to handle new functions while preserving the stability of the application. In his talk below, he walks through his work and provides resources and tips for developers encountering similar problems.
Check out the full presentation
Netflix initially used Node.js to enable high volume web streaming to over 182 million subscribers. Their three goals with this early infrastructure were to provide observability (metrics), debuggability (diagnostic tools) and availability (service registration). The result was the NodeQuark infrastructure. An application gateway authenticates and routes requests to the NodeQuark service, which then communicates with APIs and formats responses that are sent back to the client. With NodeQuark, Netflix also created a managed experience — teams could create custom API experiences for specific devices. This allows the Netflix app to run seamlessly on different devices.
However, Netflix wanted to move beyond web streaming and into content production. This posed several challenges to the NodeQuark infrastructure and the development team. Web streaming requires relatively few applications, but serves a huge user base. On the other hand, a content production platform houses a large number of applications that serve a limited user base. Furthermore, a content production app must have multiple levels of security for employees, partners and users. An additional issue is that development for content production is ideally fast paced while platform releases are slow, iterative processes intended to ensure application stability. Grouping these two processes together seems difficult, but the alternative is to spend unnecessary time and effort building a completely separate infrastructure.
Hermeto decided that in order to solve Netflix’s problems, he would need to use self-contained modules. In other words, plugins! By transitioning to plugins, the Netflix team was able to separate the infrastructure’s functions while still retaining the ability to reuse code shared between web streaming and content production. Hermeto then took plugin architecture to the next step by creating application profiles. The application profile is simply a list of plugins required by an application. The profile reads in these specific plugins and then exports a loaded array. Therefore, the risk of a plugin built for content production breaking the streaming application was reduced. Additionally, by sectioning code out into smaller pieces, the Netflix team was able to remove moving parts from the core system, improving stability.
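The profile idea can be sketched in a few lines of Node.js. This is an illustrative mock, not Netflix’s actual NodeQuark code; every name here is invented:

```javascript
// Hypothetical sketch of the application-profile pattern described above:
// a profile is just a list of self-contained plugins, and loading the
// profile resolves each plugin into a loaded array.

// Two example plugins, each a self-contained module with a setup step.
const metricsPlugin = {
  name: 'metrics',
  setup: () => ({ record: (event) => `recorded:${event}` }),
};
const authPlugin = {
  name: 'auth',
  setup: () => ({ check: (token) => token === 'valid' }),
};

// A registry of available plugins, shared between the streaming and
// content-production sides of the platform.
const registry = { metrics: metricsPlugin, auth: authPlugin };

// An application profile: simply the list of plugin names an app needs.
const streamingProfile = ['metrics'];
const productionProfile = ['metrics', 'auth'];

// Loading a profile reads in the listed plugins and exports a loaded array.
function loadProfile(profile) {
  return profile.map((name) => {
    const plugin = registry[name];
    if (!plugin) throw new Error(`unknown plugin: ${name}`);
    return { name: plugin.name, api: plugin.setup() };
  });
}

const loaded = loadProfile(productionProfile);
console.log(loaded.map((p) => p.name)); // [ 'metrics', 'auth' ]
```

Because a plugin built only for content production never appears in the streaming profile, it cannot break the streaming application.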
In the future, Hermeto wants to allow teams to create specific application profiles that they can give to customers. Additionally, Netflix may be able to switch from application versions to application profiles as the code breaks into smaller and smaller pieces.
To finish his talk, Hermeto gave his personal recommendations for open source projects that are useful for observability and debuggability. Essentially, a starting point for building out your own production-level application!
This blog is a call to action for package maintainers: to help move one of our initiatives forward, we need your help.
It’s been almost two years, and we’ve been working on a number of initiatives, which you can learn more about through the issues in the package-maintenance repo. Things never move quite as fast as we’d like, but we are making progress in a number of different areas.
One area that we identified was:
Building and documenting guidance, tools and processes that businesses can use to identify packages on which they depend, and then to use this information to be able to build a business case that supports their organization and developers helping to maintain those packages.
We started by looking at how to close the gap between maintainers and consumers in terms of expectations. Mismatched expectations can often be a source of friction and by providing a way to communicate the level of support behind a package we believe we can:
help maintainers communicate the level of support they can/want to provide, which versions of Node.js they plan to support going forward, and the current level of backing in place to help keep development of the package moving forward.
reduce potential friction due to mismatched expectations between module maintainers and consumers.
help consumers better understand the packages they depend on so that they can improve their planning and manage risk.
In terms of managing risk we hope that by helping consumers better understand the key packages they depend on, it will encourage them to support these packages in one or more ways:
encourage their employees to help with the ongoing maintenance
provide funding to the existing maintainers
support the Foundation the packages are part of (for example the OpenJS Foundation)
After discussion at one of the Node.js Collaborator Summits where there was good support for the concept, the team has worked to define some additional metadata in the package.json in order to allow maintainers to communicate this information.
The TL;DR version is that it allows the maintainer to communicate:
target: the platform versions that the package maintainer aims to support. This is different from the existing engines field in that it expresses a higher-level intent, such as “current LTS version,” for which the specific versions can change over time.
response: how quickly the maintainer chooses to, or is able to, respond to issues and contacts for that level of support
backing: how the project is supported, and how consumers can help support the project.
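As a hypothetical illustration (the authoritative schema lives in the support specification itself, and the field values below are invented for the example), a package.json carrying this metadata might look something like:

```json
{
  "name": "example-package",
  "version": "1.2.3",
  "support": {
    "versions": [
      {
        "version": "*",
        "target": { "node": "lts" },
        "response": { "type": "time-permitting" },
        "backing": { "hobby": "https://github.com/example/example-package" }
      }
    ]
  }
}
```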
We completed the specification a while ago, but before asking maintainers to start adding the support information we wanted to provide some tooling to help validate that the information added was complete and valid. We’ve just finished the first version of that tool, which is called support.
The tool currently offers two commands:
The show command displays a simple tree of the packages for an application along with the raw support information for those packages. More sophisticated commands to help consumers review and understand the support info will make sense later, but at this point it’s more important to start getting the information filled in, as that is needed before more sophisticated analysis is useful.
The validate command helps maintainers validate that they’ve added/defined the support information correctly. If there are errors or omissions it will let the maintainer know so that as support information is added it is high quality and complete.
Our call to action is for package maintainers to:
Review the support specification and give us your feedback if you have suggestions/comments.
Add support info to your package
Use the support tool in order to validate the support info and give us feedback on the tool
Let us know when you’ve added support info so that we can keep track of how well we are doing in terms of the ecosystem supporting the initiative, as well as knowing which real-world packages we can use/reference when building additional functionality into the support tool.
We hope to see the ecosystem start to provide this information and look forward to seeing what tooling people (including the package-maintenance working group and others) come up with to help achieve the goals outlined.
The OpenJS In Action series features companies that use OpenJS Foundation projects to help develop efficient, effective web technologies.
Expedia is an example of how adoption of new technologies and techniques can improve customer and developer experiences.
Robin Ginn, executive director of the OpenJS Foundation, interviewed Tiffany Le-Nguyen, Software Development Engineer at Expedia Group. Le-Nguyen explained how accessibility and performance concerns led developers to modernize Expedia’s infrastructure. One of the choices they made was to integrate ESLint into their testing pipeline to catch bugs and format input before content was pushed live. ESLint also proved to be a huge time-saver — it enforced development standards and warned developers when incorrect practices were being used.
Expedia is used globally to book properties and dates for trips. Users reserve properties in different currencies across different time zones, which makes it difficult to track when a property was reserved and whether the correct amount was paid. Luckily, Expedia was able to utilize Globalize, an OpenJS project that provides number formatting and parsing, date and time formatting, and currency formatting for languages across the world. Le-Nguyen was able to simplify currency tracking across continents by integrating Globalize into the project.
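Globalize builds on Unicode CLDR data and has its own API; purely to illustrate the kind of problem it solves, here is a sketch using Node’s built-in Intl API instead (the amounts and locales are invented for the example):

```javascript
// Sketch of locale-aware currency and timestamp handling of the kind
// Globalize provides, shown with Node's built-in Intl API for
// illustration (Globalize's own API and CLDR data pipeline differ).

const amount = 1234.5;

// The same amount rendered for different markets.
const usd = new Intl.NumberFormat('en-US', {
  style: 'currency',
  currency: 'USD',
}).format(amount);
const jpy = new Intl.NumberFormat('ja-JP', {
  style: 'currency',
  currency: 'JPY',
}).format(amount);

console.log(usd); // $1,234.50
console.log(jpy); // yen has no minor unit, so the value is rounded

// Booking timestamps are kept in UTC so reservations made in different
// time zones order correctly.
const bookedAt = new Date(Date.UTC(2020, 8, 1, 12, 0, 0));
console.log(bookedAt.toISOString()); // 2020-09-01T12:00:00.000Z
```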
To end the talk, Le-Nguyen suggested that web developers should take another look into UI testing. Modern testing tools have simplified the previously clunky process. Proper implementation of a good testing pipeline improves the developer experience and leads to a better end product for the user.
This post originally appeared on DZone on September 1, 2020.
messageformat is an OpenJS Foundation project that handles both pluralization and gender in applications. It helps keep messages in human-friendly formats, and can be the basis for tone and accuracy that are critical for applications. Pluralization and gender are not a simple challenge, and deciding on which message format to implement can be pushed down the priority list as development teams make decisions on resources. However, this can lead to tougher transitions later on in the process with both technology and vendor lock-in playing a role.
Quick note: The upstream spec is called ICU MessageFormat. ICU stands for International Components for Unicode: a set of portable libraries that are meant to make working with i18n easier for Java and C/C++ developers. If you’ve worked on a project with i18n/l10n, you may have used the ICU MessageFormat without knowing it.
How do formats deal with nuances in language?
It’s all about choices. Variance (for example, how the greeting used by a program could vary from one instance to the next) is handled by having the message format you’re using support choices. So you can have some random number coming in and, depending on that number, you select one of a number of choices. This functionality isn’t directly built into ICU MessageFormat, but it’s very easily implementable in a way that gets you results.
We need to decide how we deal with choices and whether you can have just a set number of different choice types. Is it a sort of generic function that you can define and then use? It’s an interesting question, but ICU MessageFormat doesn’t yet provide an easy, clear answer to that. But it provides a way of getting what you want.
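The choice-based selection described above can be sketched with Node’s built-in Intl.PluralRules API. The message strings here are invented for the example; a real project would use messageformat itself:

```javascript
// A minimal sketch of the "choices" idea: the message rendered depends
// on a plural category selected from a number, as in ICU MessageFormat's
// {count, plural, one {...} other {...}} construct.

const english = new Intl.PluralRules('en-US');

// One message variant per plural category used by English.
const messages = {
  one: 'You have # new message',
  other: 'You have # new messages',
};

function format(count) {
  const category = english.select(count); // 'one' or 'other' for English
  return messages[category].replace('#', String(count));
}

console.log(format(1)); // You have 1 new message
console.log(format(5)); // You have 5 new messages
```

Other languages have more plural categories (such as 'few' and 'many'), which is exactly the complexity a shared message format takes off the developer’s plate.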
What are the biggest problems with messaging formats?
Perhaps the biggest problem is that while ICU MessageFormat is the closest thing we have to a standard, that doesn’t mean it is in standard use by everyone. There are a number of other standards, and various versions used by different tools, workflows, and localization processes. The biggest challenge is that when you have some kind of interface and you want to present some messages in that interface, there isn’t one clear solution that’s always the right one for you.
And then it also becomes challenging because, for the most part, almost any solution that you end up with will solve most of the problems that you have. This is the scope in which it’s easy to get lock-in. Effectively, if you have a workflow that works with one standard or one set of tools or one format that you’re using, then you have some sort of limitation. Eventually, at some point, you will want to do something that your systems aren’t supporting. You can feel like it’s a big cost to change that system, and therefore you make do with what you have, and then you get a suboptimal workflow and a suboptimal result. Eventually, your interface and whole project may not work as well.
It’s easy to look at messageformat and go, “That’s too complicated for us, let’s pick something simpler.” You end up being stuck with “that something simpler” for the duration of whatever it is that you’re working on.
You’re forced to make a decision between two bad options. So the biggest challenge is it would be nice to have everyone agree that “this is the right thing to do” and do it from the start! (laughs)
But of course that is never going to happen. When you start building an interface like that, you start with just having a JSON file with keys and messages. That will work for a long time, for a really great variety of interfaces, but it starts breaking at some point, and then you start fixing it, and then your fix has become your own custom bespoke localization system.
Is technology lock-in a bigger problem than vendor lock-in?
Technology lock-in is the largest challenge. Of course there is vendor lock-in as well, because there are plenty of companies offering their solutions and tools and systems for making all of this work and once you’ve started using them, you’re committed. Many of them use different standards than messageformat, their own custom ones.
In the Unicode working group where I’m active, we are essentially talking about messageformat 2. How do we take the existing ICU MessageFormat specification and improve upon it? How do we make it easier to use? How do we make sure there’s better tooling around it? What sorts of features do we want to add or even remove from the language as we’re doing this work?
Does using TypeScript help or hurt with localization?
But of course in TypeScript you need to be much more clear about what the shape of that message is. And, if for whatever reason not everything is a string, then it gets complicated.
Should open source projects build their own solution for localization?
There are a number – like three or four – whole stacks of tooling for various environments for localization. And these are the sorts of things that you should be looking at, rather than writing your own.
How is the OpenJS Foundation helping with localization?
Well, along with messageformat OpenJS hosts Globalize which utilizes the official Unicode CLDR JSON data.
The greatest benefit that I or the messageformat project is currently getting from the OpenJS Foundation is that the Standards Working Group is quite active. And with their support, I’m actively participating in the Unicode Consortium working group I mentioned earlier where we are effectively developing the next version of the specification for messageformat.
How far off is the next specification for messageformat?
It’s definitely a work in progress. We have regular monthly video calls and are making good progress otherwise. I would guess that we might get something in actual code maybe next year. But it may actually take longer than that for the new messageformat to become standard and ready.
How will localization be handled differently in 3-5 years?
Effectively, this is also coming back to what the OpenJS Foundation is supporting. What I’m primarily trying to push with messageformat is to make the whole project obsolete! Right now we’re working on messageformat 3, a refactoring that includes some breaking changes. But hopefully a later version will be a polyfill for the actual Intl.MessageFormat functionality that will come out at some point.
On a larger scale, it’s hard to predict how much non-textual interfaces are going to become a more active part of our lives. When you’re developing an application that uses an interface that isn’t textual, what considerations do you really need to bring in and how do you really design everything to work around that? When we’re talking about Google Assistant, Siri, Amazon Echo, their primary interface is really language, messages. Those need to be supported by some sort of backing structure. So can that be messageformat?
Some of the people working on these systems are actively participating in the messageformat 2 specifications work. And through that, we are definitely keeping that within the scope of what we’re hoping to do.
Try it out now
To install the core messageformat package, use:
npm install --save-dev messageformat@next
This includes the MessageFormat compiler and a runtime accessor class that provides a slightly nicer API for working with larger numbers of messages. More information: messageformat.github.io/messageformat/v3
The following blog was originally posted on AMP.dev. AMP is a growth project at the OpenJS Foundation. It was posted by Tobie Langel and Jory Burson on behalf of the AMP AC.
AMP’s development is stewarded by an open governance system of multiple working groups and two committees: the Technical Steering Committee (TSC), which is responsible for the project’s direction and day-to-day operations, and the Advisory Committee (AC), which provides valuable perspective, stakeholder input, and advice to the TSC and to working groups.
An important goal of this governance structure is to ensure that the voices of those who do not contribute code to AMP, but are nonetheless impacted by it, get heard. This responsibility is shared across all contributors and governing bodies but falls particularly on the shoulders of the AC, which has a duty to ensure its membership is as representative and diverse as possible.
The AC is opening its call for nominations. We hope you will consider joining it. The AC is looking for candidates who will bring new perspectives and insight from AMP’s various constituencies: the industries adopting AMP to deliver content to their users, such as the publishing industry and e-commerce; the tech vendors whose solutions help power these industries, such as content delivery networks, web agencies, tooling providers, or payment providers; the industry experts focusing on topics such as accessibility, internationalization, performance, or standardization; and end-users and their representatives, such as consumer advocates. This list is by no means exhaustive. All candidates are welcomed and encouraged to apply, but preference will be given to those who bring an underrepresented perspective to the AC and help broaden its horizon.
The AC is fully distributed—it spans over nine time zones—working asynchronously over email and GitHub issues (although its work isn’t technical in nature). It meets every other week over video conference, and ordinarily every six months face to face (note that there are no face-to-face meetings planned for the remainder of 2020 or the first half of 2021 at this time).
We will be collecting new applications until Friday, October 2 at 23:59 AoE (Anywhere on Earth). The AC will elect its new members through a consensus based process and will announce the results on Monday, October 20.
The following blog is based on a talk given at the OpenJS Foundation’s annual OpenJS World event and covers solutions created with Node.js.
When someone proposes a complicated, expensive solution, ask yourself: can it be done cheaper, better and/or faster? Last year, an external vendor wanted to charge $103,000 to create an interactive form and store the responses. Ryan Hillard, Systems Developer at the U.S. Small Business Administration, was brought in to create a less expensive, low-maintenance alternative to the vendor’s proposal. Hillard was able to deliver a solution using roughly 320 lines of code and $3,000. In the talk below, Hillard describes the difficulties involved and how his Node.js solution fixed the problem.
Last year, Hillard started work on a government’s case management system that received and processed feedback from external and internal users. Unfortunately, a recent upgrade and rigorous security measures prevented external users from leaving feedback. Hillard needed to create a secure interactive form and then store the data. However, the solution also needed to be cheap, easy to maintain and stable.
Hillard decided to use three common services: Amazon Simple Storage Service (S3), Amazon Web Services (AWS) Lambda and Node.js. Together, these pieces provided a simple and versatile way to capture and then store response data. Maintenance is low because the servers are maintained by Amazon. Additionally, future developers can easily alter and improve the process as all three services/languages are commonly used.
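As a rough sketch of the Lambda piece of such a pipeline (all names here are invented, and storage is stubbed with a Map rather than the real AWS SDK and S3):

```javascript
// Hypothetical sketch of a tiny Lambda-style handler that accepts a
// feedback-form submission, validates it, and hands it to a storage step.
// A real version would write to S3 via the AWS SDK.

function validate(body) {
  if (!body || typeof body.email !== 'string' || typeof body.message !== 'string') {
    return null;
  }
  return { email: body.email.trim(), message: body.message.trim() };
}

// Stand-in for an S3 putObject call.
async function store(record, storage) {
  const key = `feedback/${Date.now()}.json`;
  storage.set(key, JSON.stringify(record));
  return key;
}

async function handler(event, storage = new Map()) {
  let parsed;
  try {
    parsed = JSON.parse(event.body || '{}');
  } catch {
    return { statusCode: 400, body: 'invalid JSON' };
  }
  const submission = validate(parsed);
  if (!submission) {
    return { statusCode: 400, body: 'invalid submission' };
  }
  const key = await store(submission, storage);
  return { statusCode: 200, body: JSON.stringify({ stored: key }) };
}

(async () => {
  const res = await handler({
    body: JSON.stringify({ email: 'a@b.co', message: 'Works great' }),
  });
  console.log(res.statusCode); // 200
})();
```

The whole thing stays small enough that a future developer can read it end to end, which was exactly the maintenance goal.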
Developers should ask themselves how they can solve their problems without introducing anything new. In other words, size does matter — the smallest, simplest toolkit is the best!
In the middle of a project it can be difficult to identify redundant or problematic sections of code. Frustrations compound when multiple developers, all using different styles, need to collaborate and write using a unified format. However, the aforementioned problems can be avoided with gentle reminders and automated feedback.
This is where linters come in. Linters are essentially spell checkers for code. They highlight syntax errors, refactoring opportunities and style/formatting issues as code is written. This helps developers to notice and fix small errors before they become a major problem. ESLint is an especially powerful tool that identifies problematic sections of code based on the user’s plugins, rules and configurations. Due to its flexibility, it has become an incredibly popular addition to many projects. Kai Cataldo, a maintainer of ESLint, and Brandon Mills, a member of the Technical Steering Committee, answered questions and explained how to get started with the OpenJS project in a recent AMA.
In the talk, Cataldo clarified that ESLint is designed to help developers write the code they want. It should not tell them whether their code is “right” or “wrong” unless the error is code-breaking. The default settings of ESLint help developers recognize syntax errors early on, but the tool can be made more powerful with the addition of user-defined rules or downloadable configurations. Additionally, teams can standardize their code by defining a shared set of rules for the linter, saving time by producing consistent code that other members can easily understand.
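A team-wide rule set of the kind described is typically just a small config file. As a hypothetical illustration (the rule choices here are invented for the example), a minimal .eslintrc.json might look like:

```json
{
  "env": { "node": true, "es2020": true },
  "extends": "eslint:recommended",
  "rules": {
    "semi": ["error", "always"],
    "eqeqeq": "error",
    "no-unused-vars": "warn"
  }
}
```

Every developer on the team then gets the same feedback from the same rules, whether in their editor or in CI.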
Cataldo and Mills also revealed future plans for ESLint — updated documentation, simplified configuration and parallel linting. They also discussed common problems of linters and how developers can contribute to the project to make ESLint even more powerful.
In this AMA, we discussed the benefits of the OpenJS Node.js certification program. The certification tests a developer’s knowledge of Node.js and allows them to quickly establish their credibility and value in the job market. Robin Ginn, OpenJS Foundation Executive Director, served as the moderator. David Clements, Technical Lead of OpenJS Certifications, and Adrian Estrada, VP of Engineering at NodeSource, answered questions posed by the community. The full AMA is available at the link below:
The OpenJS Foundation offers two certifications: OpenJS Node.js Application Developer (JSNAD) and OpenJS Node.js Services Developer (JSNSD). The Application Developer certification tests general knowledge of Node.js (file systems, streams etc.). On the other hand, the Services Developer certification asks developers to create basic Node services that might be required by a startup or enterprise. Services might include server setup and developing security to protect against malicious user input.
In the talk, Clements and Estrada discussed why they created the certifications. They wanted to create an absolute measure of practical skill to help developers stand out and ease the difficulties of hiring for the industry. To that end, OpenJS certifications are relatively cheap and applicable to real world problems encountered in startup and enterprise environments.
A timestamped summary of the video is available below:
Note: If you are not familiar with the basics of the two certifications offered by the OpenJS Foundation, jumping to the two bolded sections may be a good place to start.