messageformat is Working Hard to Make Itself Obsolete

By Blog, In The News, messageformat, Project Update

This post originally appeared on DZone on September 1, 2020.

messageformat is an OpenJS Foundation project that handles both pluralization and gender in applications. It helps keep messages in human-friendly formats and can be the basis for the tone and accuracy that are critical to applications. Pluralization and gender are not simple challenges, and deciding which message format to implement can be pushed down the priority list as development teams allocate resources. However, this can lead to tougher transitions later in the process, with both technology and vendor lock-in playing a role.

Quick note: The upstream spec is called ICU MessageFormat. ICU stands for International Components for Unicode: a set of portable libraries that are meant to make working with i18n easier for Java and C/C++ developers. If you’ve worked on a project with i18n/l10n, you may have used the ICU MessageFormat without knowing it. 
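For readers who haven't seen it, ICU MessageFormat encodes plural and gender choices inline in the message text itself. A couple of small illustrative messages (syntax only, no library code; the argument names are hypothetical):

```
You have {numEmails, plural,
  one {# new email}
  other {# new emails}
}

{gender, select,
  female {{name} added you to her circle}
  male {{name} added you to his circle}
  other {{name} added you to their circle}
}
```

Translators work with the whole message, including its variants, rather than with fragments stitched together in code.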

To find out more about messageformat, I spoke with Eemeli Aro, Software Developer at Vincit, and OpenJS Cross Project Council (CPC) member. Aro maintains the messageformat libraries, and actively participates in various efforts to improve JavaScript localization. Aro spoke on “The State of the Art in Localization” at last year’s Node+JS Interactive. Aro is an active participant in ECMA-402 processes, runs the monthly HelsinkiJS meetups, and helps organise React Finland conferences. 

How do formats deal with nuances in language? 

It’s all about choices. Variance (e.g. how the greeting used by a program could vary from one instance to the next) is dealt with by having the message format you’re using support choices. So you can have some random number coming in, and depending on that number, you select one of a number of choices. This functionality isn’t directly built into ICU MessageFormat, but it’s easily implementable in a way that gets you results. 

We need to decide how we deal with choices and whether you can have just a set number of different choice types. Is it a sort of generic function that you can define and then use? It’s an interesting question, but ICU MessageFormat doesn’t yet provide an easy, clear answer to that. But it provides a way of getting what you want. 
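The selection mechanism Aro describes can be sketched in plain JavaScript with the built-in Intl.PluralRules API. This is an illustration of the idea, not the messageformat library's API; the `messages` shape and names here are hypothetical:

```javascript
// Sketch of message-variant selection, in the spirit of ICU MessageFormat's
// plural argument. Uses only the built-in Intl.PluralRules API; the
// `messages` object is an illustrative shape, not a real library's API.
const pluralRules = new Intl.PluralRules('en-US');

const messages = {
  newEmails: {
    one: (n) => `You have ${n} new email.`,
    other: (n) => `You have ${n} new emails.`,
  },
};

function format(key, n) {
  const category = pluralRules.select(n); // 'one', 'other', etc.
  const variant = messages[key][category] || messages[key].other;
  return variant(n);
}

console.log(format('newEmails', 1)); // You have 1 new email.
console.log(format('newEmails', 5)); // You have 5 new emails.
```

A real message format does the same selection from data rather than code, so translators can supply per-language variants without touching the application.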

What are the biggest problems with messaging formats?

Perhaps the biggest problem is that while ICU MessageFormat is the closest thing we have to a standard, that doesn’t mean it is in standard use by everyone. There are a number of other standards, and various versions are used by different localization tools, workflows, and processes. The biggest challenge is that when you have some kind of interface and you want to present messages in that interface, there isn’t one clear solution that’s always the right one for you. 

And then it also becomes challenging because, for the most part, almost any solution that you end up with will solve most of the problems that you have. This is the scope in which it’s easy to get lock-in. Effectively, if you have a workflow that works with one standard or one set of tools or one format that you’re using, then you have some sort of limitation. Eventually, at some point, you will want to do something that your systems aren’t supporting. You can feel like it’s a big cost to change that system, and therefore you make do with what you have, and then you get a suboptimal workflow and a suboptimal result. Eventually, your interface and whole project may not work as well. 

It’s easy to look at messageformat and go, “That’s too complicated for us, let’s pick something simpler.” You end up being stuck with “that something simpler” for the duration of whatever it is that you’re working on. 

You’re forced to make a decision between two bad options. So the biggest challenge is it would be nice to have everyone agree that “this is the right thing to do” and do it from the start! (laughs) 

But of course that is never going to happen. When you start building an interface like that, you start with just having a JSON file with keys and messages. That will work for a long time, for a really great variety of interfaces, but it starts breaking at some point, and then you start fixing it, and then your fix has become your own custom bespoke localization system. 

Is technology lock-in a bigger problem than vendor lock-in? 

Technology lock-in is the largest challenge. Of course there is vendor lock-in as well, because there are plenty of companies offering their solutions and tools and systems for making all of this work and once you’ve started using them, you’re committed. Many of them use different standards than messageformat, their own custom ones. 

In the Unicode working group where I’m active, we are essentially talking about messageformat 2. How do we take the existing ICU MessageFormat specification and improve upon it? How do we make it easier to use? How do we make sure there’s better tooling around it? What sorts of features do we want to add or even remove from the language as we’re doing this work? 

messageformat, the library that I maintain and is an OpenJS project, is a JavaScript implementation of ICU MessageFormat. It tries to follow the specification as closely as it can. 

Does using TypeScript help or hurt with localization? 

For the most part, it works pretty well. TypeScript brings in an interesting question: how do you type the messages that you’re getting out of whatever system you’re using? TypeScript itself doesn’t provide for plugins at the parser level, so you can’t define that. In JavaScript, for a specific file, you can use specific tools for the different types that are coming out of it, because messages don’t usually come one by one. You have messages in collections, so if you get one message out of a collection in JavaScript, you can make very safe assumptions about what the shape of that message is going to be. 

But of course in TypeScript you need to be much more clear about what the shape of that message is. And, if for whatever reason not everything is a string, then it gets complicated. 

It’s entirely manageable. You can use JavaScript tools for localization in a TypeScript environment; there are just these edge cases that could have better solutions than we currently have, but addressing them would require some work on TypeScript’s behalf as well.

Should open source projects build their own solution for localization? 

I think this is one of those cases where it’s good to realize that this is JavaScript. If there’s a problem you can express briefly, you go look and you’ll find five competing solutions that are all valid in one way or another. Whatever your problem or issue is, it is highly likely that you will find someone else has already solved your problem for you, you just need to figure out how to adapt their solution to your exact problem. 

There are a number – like three or four – whole stacks of tooling for various environments for localization. And these are the sorts of things that you should be looking at, rather than writing your own. 

How is the OpenJS Foundation helping with localization?

Well, along with messageformat, OpenJS hosts Globalize, which utilizes the official Unicode CLDR JSON data. 

The greatest benefit that I or the messageformat project is currently getting from the OpenJS Foundation is that the Standards Working Group is quite active. And with their support, I’m actively participating in the Unicode Consortium working group I mentioned earlier where we are effectively developing the next version of the specification for messageformat. 

How far off is the next specification for messageformat?

It’s definitely a work in progress. We have regular monthly video calls and are making good progress otherwise. I would guess that we might get something into actual code maybe next year. But it may actually take longer than that for messageformat to become standard and ready. 

How will localization be handled differently in 3-5 years? 

The messageformat working group didn’t start out under Unicode, it started out under ECMA-402. That whole work started from looking to see what we should do about adding support for messageformat to JavaScript. And this is one of the main expected benefits to come out of the Unicode messageformat working group. In the scope of 3-5 years, it is reasonable to assume that we are going to have something like Intl.MessageFormat as a core component in JavaScript, which will be great! 

Effectively, this is also coming back to what the OpenJS Foundation is supporting. What I’m primarily trying to push with messageformat is to make the whole project obsolete! Right now we’re working on messageformat 3, which is a refactoring that includes some breaking changes. But hopefully a later version will be a polyfill for the actual Intl.MessageFormat functionality that will come out at some point. 

On a larger scale, it’s hard to predict how much non-textual interfaces are going to become a more active part of our lives. When you’re developing an application that uses an interface that isn’t textual, what considerations do you really need to bring in and how do you really design everything to work around that? When we’re talking about Google Assistant, Siri, Amazon Echo, their primary interface is really language, messages. Those need to be supported by some sort of backing structure. So can that be messageformat? 

Some of the people working on these systems are actively participating in the messageformat 2 specifications work. And through that, we are definitely keeping that within the scope of what we’re hoping to do. 

Try it out now

To install the core messageformat package, use:

npm install --save-dev messageformat@next

This includes the MessageFormat compiler and a runtime accessor class that provides a slightly nicer API for working with larger numbers of messages. More information:

AMP Advisory Committee 2020 election

By AMP, Blog

The following blog was originally posted on the AMP project blog by Tobie Langel and Jory Burson on behalf of the AMP AC. AMP is a growth project at the OpenJS Foundation. 

AMP’s development is stewarded by an open governance system of multiple working groups and two committees: the Technical Steering Committee (TSC), which is responsible for the project’s direction and day-to-day operations, and the Advisory Committee (AC), which provides valuable perspective, stakeholder input, and advice to the TSC and to working groups.

An important goal of this governance structure is to ensure that the voices of those who do not contribute code to AMP, but are nonetheless impacted by it, get heard. This responsibility is shared across all contributors and governing bodies but falls particularly on the shoulders of the AC, which has a duty to ensure its membership is as representative and diverse as possible.

The AC is opening its call for nominations. We hope you will consider joining it. The AC is looking for candidates who will bring new perspectives and insight from AMP’s various constituencies: the industries adopting AMP to deliver content to their users, such as the publishing industry and e-commerce; the tech vendors whose solutions help power these industries, such as content delivery networks, web agencies, tooling providers, or payment providers; the industry experts focusing on topics such as accessibility, internationalization, performance, or standardization; and end-users and their representatives, such as consumer advocates. This list is by no means exhaustive. All candidates are welcomed and encouraged to apply, but preference will be given to those who bring an underrepresented perspective to the AC and help broaden its horizon.

For a better understanding of the AC’s work and membership responsibilities, read the member expectations document, working mode document, and minutes of past meetings. Also have a look at its GitHub repository and in particular its project board.

The AC is fully distributed—it spans over nine time zones—working asynchronously over email and GitHub issues (although its work isn’t technical in nature). It meets every other week over video conference, and ordinarily every six months face to face (note that there are no face-to-face meetings planned for the remainder of 2020 or the first half of 2021 at this time).

If you have questions, feel free to reach out to Tobie Langel or Jory Burson, AC facilitators.

If you’re interested in joining the AC, please apply.

We will be collecting new applications until Friday, October 2 at 23:59 AoE (Anywhere on Earth). The AC will elect its new members through a consensus based process and will announce the results on Monday, October 20.

How Node.js saved the U.S. Government $100K

By Blog, Case Study, Node.js, OpenJS World

The following blog is based on a talk given at the OpenJS Foundation’s annual OpenJS World event and covers solutions created with Node.js.

When someone proposes a complicated, expensive solution, ask yourself: can it be done cheaper, better and/or faster? Last year, an external vendor wanted to charge $103,000 to create an interactive form and store the responses. Ryan Hillard, Systems Developer at the U.S. Small Business Administration, was brought in to create a less expensive, low-maintenance alternative to the vendor’s proposal. Hillard was able to create a solution using ~320 lines of code and $3000. In the talk below, Hillard describes the difficulties involved and how his Node.js solution fixed the problem.

Last year, Hillard started work on a government case management system that received and processed feedback from external and internal users. Unfortunately, a recent upgrade and rigorous security measures prevented external users from leaving feedback. Hillard needed to create a secure interactive form and then store the data. However, the solution also needed to be cheap, easy to maintain, and stable. 

Hillard decided to use three common services: Amazon Simple Storage Service (S3), Amazon Web Services (AWS) Lambda and Node.js. Together, these pieces provided a simple and versatile way to capture and then store response data. Maintenance is low because the servers are maintained by Amazon. Additionally, future developers can easily alter and improve the process as all three services/languages are commonly used. 
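The pattern described above can be sketched as a small Lambda handler that validates a form submission and writes it to S3. This is a minimal illustration, not Hillard's actual code; the bucket name and field names are hypothetical, and the S3 client is injected so the handler stays testable without AWS credentials:

```javascript
// Sketch of the S3 + Lambda + Node.js pattern: validate a form submission
// and store it as a JSON object in S3. Bucket and field names are
// hypothetical; the S3 client is passed in rather than imported so the
// handler can be exercised without AWS access.
const BUCKET = 'feedback-form-responses'; // hypothetical bucket name

function createHandler(s3) {
  return async function handler(event) {
    let body;
    try {
      body = JSON.parse(event.body);
    } catch {
      return { statusCode: 400, body: 'Invalid JSON' };
    }
    if (!body.feedback) {
      return { statusCode: 400, body: 'Missing "feedback" field' };
    }
    // One object per submission keeps writes simple and conflict-free.
    const key = `responses/${Date.now()}.json`;
    await s3.putObject({ Bucket: BUCKET, Key: key, Body: JSON.stringify(body) });
    return { statusCode: 200, body: JSON.stringify({ stored: key }) };
  };
}
```

In a real deployment the injected client would be an AWS SDK S3 client, and the function would sit behind an HTTP trigger such as API Gateway.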

To end his talk, Hillard discussed the design and workflow processes that led him to his solution. He compares JavaScript to a giant toolkit with hundreds of libraries and dependencies — a tool for every purpose. However, this variety can be counterproductive as the complexity – and thus the management time – increases.

Developers should ask themselves how they can solve their problems without introducing anything new. In other words, size does matter — the smallest, simplest toolkit is the best!

How ESLint Helps Developers to Write Better Code

By AMA, Blog, ESLint

In the OpenJS Foundation “Ask Me Anything” (AMA) series, we get to hear from many inspiring leaders in the JavaScript community. We will highlight the key questions answered by the panel members and provide resources to help developers save time and improve their code. This month, we feature ESLint.

In the middle of a project it can be difficult to identify redundant or problematic sections of code. Frustrations compound when multiple developers, all using different styles, need to collaborate and write using a unified format. However, the aforementioned problems can be avoided with gentle reminders and automated feedback.

This is where linters come in. Linters are essentially spell checkers for code. They highlight syntax errors, refactoring opportunities and style/formatting issues as code is written. This helps developers to notice and fix small errors before they become a major problem. ESLint is an especially powerful tool that identifies problematic sections of code based on the user’s plugins, rules and configurations. Due to its flexibility, it has become an incredibly popular addition to many projects. Kai Cataldo, a maintainer of ESLint, and Brandon Mills, a member of the Technical Steering Committee, answered questions and explained how to get started with the OpenJS project in a recent AMA. 

In the talk, Cataldo clarified that ESLint is designed to help developers write the code they want. It should not tell them whether their code is “right” or “wrong” unless the error is code-breaking. The default settings of ESLint help developers recognize syntax errors early on, but the tool can be made more powerful with user-defined rules or downloadable configurations. Additionally, teams can standardize their code by defining a shared set of rules for the linter, saving time by producing consistent code that can be easily understood by every member. 
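As an illustration of the configuration the panelists describe, a minimal `.eslintrc.json` might extend the recommended ruleset and then add a few team-chosen rules (the specific rules below are examples, not recommendations from the AMA):

```json
{
  "extends": "eslint:recommended",
  "env": { "browser": true, "es2020": true },
  "parserOptions": { "ecmaVersion": 2020, "sourceType": "module" },
  "rules": {
    "no-unused-vars": "error",
    "eqeqeq": "warn",
    "semi": ["error", "always"]
  }
}
```

Checking a config like this into the repository is what gives every collaborator the same automated feedback.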

Cataldo and Mills also revealed future plans for ESLint — updated documentation, simplified configuration and parallel linting. They also discussed common problems of linters and how developers can contribute to the project to make ESLint even more powerful. 

Full AMA Replay

You can find the full AMA broken up by section below: 

0:57 Member Introductions 

3:19 Background of ESLint

5:55 What is a linter?

11:42 Why ESLint is moving away from correcting styling “errors” 

13:32 Future plans for ESLint

21:57 Will you add HTML linting? 

26:15 Exciting features in newest release

27:35 What is the most controversial linting rule? 

29:39 How to get started

32:25 Why doesn’t ESLint have default configurations? 

36:44 How to contribute

For those interested in becoming involved in the project, please check out the following resources:

OpenJS Foundation AMA: Node.js Certifications

By AMA, Blog, Certification, Node.js

In this AMA, we discussed the benefits of the OpenJS Node.js certification program. The certification tests a developer’s knowledge of Node.js and allows them to quickly establish their credibility and value in the job market. Robin Ginn, OpenJS Foundation Executive Director, served as the moderator. David Clements, Technical Lead of OpenJS Certifications, and Adrian Estrada, VP of Engineering at NodeSource, answered questions posed by the community. The full AMA is available at the link below: 

The OpenJS Foundation offers two certifications: OpenJS Node.js Application Developer (JSNAD) and OpenJS Node.js Services Developer (JSNSD). The Application Developer certification tests general knowledge of Node.js (file systems, streams etc.). On the other hand, the Services Developer certification asks developers to create basic Node services that might be required by a startup or enterprise. Services might include server setup and developing security to protect against malicious user input. 

In the talk, Clements and Estrada discussed why they created the certifications. They wanted to create an absolute measure of practical skill to help developers stand out and ease the difficulties of hiring for the industry. To that end, OpenJS certifications are relatively cheap and applicable to real world problems encountered in startup and enterprise environments. 

A timestamped summary of the video is available below: 

Note: If you are not familiar with the basics of the two certifications offered by the OpenJS Foundation, jumping to the two bolded sections may be a good place to start.

AMA Topics

Introductions 0:20

How did the members start working together? 2:35

How did work on the certifications start? 5:07

Is it possible to have feedback on the exam? 9:50

Applications of psychometric analysis 12:26

What is the Node.js Application Developer certification + Services Developer certification? 14:54

How do you take the exam? What should you expect? 18:22

Will there be differential pricing between countries? 22:04

How is the criteria for new npm packages chosen? 24:55

Are test takers able to use Google or mdn? 31:52

What benefits do OpenJS certifications have for developers? 33:22

How to use the certification after completion 39:43

What are the exam principles? 40:56

How much experience is required for the exam? 44:12 

Course available in Chinese 49:09

How will new Node versions affect the certifications? 53:43 

Closing thoughts 56:35

Node.js announces new mentorship opportunity

By Blog, Node.js

This post was written by A.A. Sobaki and the Node.js Mentorship Initiative. It first appeared on the project’s blog. Node.js is an Impact Project at the OpenJS Foundation.

The Node.js Mentorship Initiative is excited to announce a new mentee opening! We’d like to invite experienced developers to apply to join the Node.js Examples Initiative.

If you’re not familiar, the Examples Initiative’s mission is to build and maintain a repository of runnable, tested examples that go beyond “hello, world!” This is an important place to find practical and real-world examples of how to use the runtime in production.

Being a part of the Examples Initiative is a big opportunity. As a mentee, you will work with and learn from industry leaders and world-class software engineers. You will receive personalized guidance as you write code that will serve as a template for countless developers as they begin to use Node.js in their projects.

To get started, complete the application and coding challenge linked below. The coding challenge is a chance to showcase your skills. It is estimated the challenge will take between two and four hours to complete.

Click the link to get started.

We look forward to receiving your application.

Michael Dawson elected Community Director

By Announcement, Blog

The OpenJS Foundation is delighted to announce that Michael Dawson has been elected to the OpenJS Board as the CPC Director, a community representation seat.  

Chosen by the Cross Project Council, Michael brings a wealth of experience to the board, having served as Node.js TSC Chair, a member of the Node.js Community Committee, an active contributor to the Node.js project, and an active participant in the CPC. Michael previously held the Node.js representative seat on the Board; this community seat replaces that designation.

In a statement provided to the community via GitHub, Michael says, “My goal as a board member is to bring the perspective of the foundation projects and greater community to the board while ensuring the needs of foundation projects are considered in the decisions that are made.”

Additionally, in being elected, Michael plans to prioritize communication between the board and community, seek input on board decisions, and help champion broader and longer-term initiatives that are important to the success of the foundation. 

As a community representative to the OpenJS Board, Michael looks forward to taking what he’s learned from his work with Node.js, the CPC and recent collaborator summits to represent the broader community. Michael adds, “I look forward to broadening my role and representing more projects. In my role as the Node.js Board rep, the TSC and Comm Comm found board updates helpful. This could be beneficial for other projects and I would be happy to work to find the right way to provide these updates.”

Michael is IBM’s community lead for Node.js, where he coordinates and leads the work of IBM’s teams that contribute to the Node.js community. He also supports IBM’s many initiatives to provide great deployment options (public and private) for Node.js, ensuring the tools and products IBM delivers provide a first-class experience for Node.js developers, and supporting IBM’s internal and customer Node.js deployments.

Michael is based in Ottawa, Ontario, Canada. Outside of the office he enjoys playing badminton and softball as well as kayaking and paddle boarding. His extracurriculars also include building things with 3D printers, CNC machines, and soldering irons, and building apps to make daily life more fun.

Project Update: Official AMP Plugin for WordPress

By AMP, Announcement, Blog, Project Update

Success with WordPress, powered by AMP

The missions of the WordPress and AMP open source projects are very well aligned. AMP, a growth project at the OpenJS Foundation, seeks to democratize performance and the building of great page experiences, which is at the core of WordPress’ goal of democratizing web publishing. 

Today the AMP team is very excited to release v2.0 of the Official AMP Plugin for WordPress! Lots of work went into this release, and it is loaded with many improvements and new capabilities in the areas of usability, performance, and flexibility. Read on to learn more, or check out the official AMP Blog for the full release notes.

AMP brings “performance-as-a-service” to WordPress, providing out-of-the-box solutions, a wide range of coding and performance best practices, always up-to-date access to the latest web platform capabilities, and effective control mechanisms (e.g. guard rails) to enable consistently good performance. AMP’s capabilities, and the guard rails it provides allow WordPress creators to take advantage of the openness and flexibility of WordPress while minimizing the amount of resources needed to be invested in developing and maintaining sites that perform consistently well. 

The Official AMP Plugin for WordPress is developed and maintained by AMP project contributors to bring the pillars of AMP content publishing to the fingertips of WordPress users, by:

  1. Automating as much as possible the process of generating AMP-valid markup, letting users follow workflows that are as close as possible to the standard WordPress workflows they are used to.
  2. Providing effective validation tools to help deal with AMP incompatibilities when they happen, including identifying errors, contextualizing them, and reporting them accurately.
  3. Providing support for AMP development to make it easier for WordPress developers to build AMP compatible ecosystem components, and build websites and solutions with AMP-compatibility built in.
  4. Supporting the serving of AMP pages on Origin, making it easier for site owners to take advantage of mobile redirection, AMP-to-AMP linking, minimization of AMP validation issues surfaced in Search Console, and generation of optimized AMP pages by default.
  5. Providing turnkey solutions for segments of WordPress creators and publishers to be able to go from zero to AMP content generation in no time, regardless of technical expertise or availability of resources. 

To learn more about AMP in WordPress, please check the release post on the official AMP Project Blog. If you haven’t tried it already, download the plugin today and get started on the path to consistently great performance for your WordPress site! And, if you are interested in becoming a contributor to the AMP Plugin for WordPress, you can engage with us in the AMP plugin GitHub repository.

Fastify: Graduation, performance and the future

By Blog, Fastify, Project Update

Fastify is moving from Incubation stage to a Growth Project! Within the OpenJS Foundation, this is a major step forward.

New projects at OpenJS start as incubation projects while maintainers complete the on-boarding checklist to join the Foundation. This includes documenting the project’s infrastructure, transferring the IP, and adopting the OpenJS Code of Conduct. When a project graduates, it has readied itself for Foundation support. At OpenJS, we share best practices and reduce redundant administrative work across projects, such as non-technical governance, to help projects grow.

The Cross Project Council (CPC) centralizes coordination among projects as well as certain technical governance and moderation processes, and oversees the progression of projects between stages of their life cycles. Fastify has passed all of its requirements and we are happy to welcome them as a Growth Project!

To find out more about Fastify and what’s next, we talked with Matteo Collina, one of the Lead Maintainers of the Fastify team, Technical Director at NearForm, Node.js Technical Steering Committee member and OpenJS Foundation Cross Project Council member.

Are there any benchmarks that people should pay attention to regarding web performance?

I often say that “performance does not matter, until it absolutely does.” Most websites and applications do not need to be fast or scale to thousands of servers. Most developers at small and big companies alike will not (and should not) care about performance at all. Their bigger concerns are maintainability and speed of delivery. As a result, applications become bigger and slower.

As an example, a lot of developers care only about the latency of a single request when the server is idle, completely ignoring the latency and load introduced by server-side processing. Providing a snappy user experience requires both the front end and the back end to work in concert and play to each other’s strengths.

What are the most important metrics people should pay attention to with regard to web performance (faster networks, run time)?

The most important metric for Node.js applications is event loop latency. We define this as the time needed to process some part of an incoming HTTP request. The higher the throughput of our application, the smaller this needs to be. Let’s take a quick example: imagine a Node.js server that can process an HTTP request in 10 milliseconds of CPU time. Do you think this server is fast? Given that most deployments have 1 CPU per container (or even less), we can say that a single container can process around 100 requests per second.

However, we cannot say whether our server is fast or slow, as it depends on the load. If our server receives fewer than 100 req/sec, it will appear snappy and “fast.” But if it’s over 100 req/sec, the service will “lag behind” and the latency of every request will start increasing.

Fastify helps deploy Node.js applications at scale by applying some load-shedding techniques in the under-pressure plugin. Essentially, if the server is busy, it will start rejecting requests, and the load balancer will try to serve them from another instance.
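The load-shedding idea Collina describes can be sketched in plain Node.js: measure how late the event loop fires a timer, and reject requests when the lag crosses a threshold. This is an illustration of the technique, not under-pressure's actual code; the threshold and function names are hypothetical:

```javascript
// Sketch of event-loop-based load shedding, the idea behind Fastify's
// under-pressure plugin (an illustration, not the plugin's implementation).
// A repeating timer measures how late the event loop fires; if the lag
// exceeds a threshold, new requests get a 503 so a load balancer can
// retry them on another instance.
const MAX_LAG_MS = 100; // hypothetical threshold
const INTERVAL_MS = 50;

let lag = 0;
let last = Date.now();
const timer = setInterval(() => {
  const now = Date.now();
  lag = Math.max(0, now - last - INTERVAL_MS); // how late the timer fired
  last = now;
}, INTERVAL_MS);
timer.unref(); // don't keep the process alive just for monitoring

function underPressure() {
  return lag > MAX_LAG_MS;
}

// In a request handler: shed load instead of queueing more work.
function handleRequest(res) {
  if (underPressure()) {
    res.writeHead(503, { 'Retry-After': '10' });
    res.end('Service overloaded');
    return;
  }
  res.end('ok');
}
```

Rejecting early keeps per-request latency bounded for the requests that are accepted, instead of letting every request slow down together.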

Now that Fastify has graduated incubation, what’s next for the project in terms of big milestones?

We’ll rest and recover! The last few months have been a race to ship Fastify v3, and now we are graduating!

It’s time to start planning Fastify v4 for 2021.

OpenJS World Keynote Series: Exploring the History of JavaScript

By Blog, OpenJS World

During the OpenJS Foundation global conference OpenJS World, Alex Williams at The New Stack had the opportunity to hear from one of the leaders in the JavaScript world, Allen Wirfs-Brock. 

Allen Wirfs-Brock served as project editor for the ECMAScript Language Specification, the international standard that defines the latest version of the JavaScript programming language. Fortunately for developers, Wirfs-Brock has greatly improved JavaScript through his contributions to ECMAScript 5, 5.1, and 6. Alex Williams, founder of The New Stack, interviewed Wirfs-Brock to review the history of JavaScript and understand how relatively unusual practices became fundamental to the language.

First-time developers often incorrectly assume that JavaScript has something to do with Java. Wirfs-Brock explains that Netscape, the producers of JavaScript, and Sun Microsystems, the producers of Java, formed a partnership in 1995 to combat Microsoft’s advances into the web market. JavaScript was originally positioned to be a simple scripting language companion for the more robust Java language, even though the two languages had many differences. 

The naming convention aside, JavaScript quickly outgrew its “companion” status and became a powerful development tool. However, Wirfs-Brock recounts how the language was initially developed with a “worse is better” mentality in order to quickly take advantage of an emerging web platform. Despite its problems, JavaScript continued to grow even as “reformation” attempts tried and failed to fix the language. 

To end the talk, Wirfs-Brock explains how he approached JavaScript’s issues in 2008 and managed to fix many of the problems the language had. His work on ECMAScript took a counterintuitive, but ultimately successful, approach that provided a way to move forward and build upon the existing framework. 

You can find the Keynote broken down by section below: 

Introduction 0:00

History of Programming Conference 0:27

JavaScript: The Most Misunderstood Programming Language 1:33

Netscape + Java = Dead Windows 3:12

Early Impacts of JavaScript 04:19

Unique/Key Players in Early JavaScript 06:05

“Worse is Better” 7:27

Browser Game Theory Developing 7:55

Diverging Design Efforts 10:24

The Failed JavaScript Reformation 12:04

Improving JavaScript: Early Failures 13:53

Moving Forward: ECMAScript 3.1 (renamed 5) 15:58

The Present and Future of JavaScript 17:05

Conclusion 19:15