lloyd.io is the personal website of Lloyd Hilaiel, a software engineer who works for Team Ozlo and lives in Denver.

All the stuff you'll find here is available under a CC BY-SA 3.0 license (use it and change it, just don't lie about who wrote it). Icons on this site are commercially available from steedicons.com. The fonts, Ubuntu and Lekton, are available in Google's Web Font directory. Finally, Jekyll is used for site rendering.

Atul, Pascal, and Stephen inspired the site's design. And in case you're interested, this site's code is available on github.

2016-05-11 00:00:00 -0700

Meet Ozlo.

Say hello to Ozlo!

(Animated Ozlo, cred @foxattacks)

After over two years, our company has finally unveiled our hard work. We set out to rethink mobile search, and the result is Ozlo, an intelligent conversational AI. With this step we join a number of technology startups and behemoths alike who believe that language and conversation, rather than more pixels and pointers, are the interface of the future.

Ozlo is a focused product that helps you find food and drink via an interface that feels like text messaging. You type what you want, and via a directed conversation you iteratively home in on something delightful. You can get a higher-level overview of the product on our blog, and you can sign up today for our invite-only beta.

2014-02-03 00:00:00 -0800

My first day at Mozilla was August 16th 2010, my last will be February 14th 2014.

My first day as a Mozillian was either in November 2004, or it might have been around January 2008. I will not have a last day as a Mozillian.

2014-01-17 00:00:00 -0800

After a couple years of experience, we (the identity community that supports Persona) have amassed a small collection of changes to the data formats which underlie the BrowserID protocol. This post is a migration guide for maintainers of BrowserID client implementations.

2013-05-15 00:00:00 -0700

I've worked on the Persona service for over two years now. My involvement began on a fateful airplane ride with the father of the specification that would become Persona. Shortly after, on April 6th 2011, I made the first commit. By July 1st 2011 Mike Hanson and I had built a prototype and made some decisions, and I wrote a post on what it is and how it works. Shortly after, Ben Adida joined me and we began carefully hiring wonderful people.

2013-04-12 00:00:00 -0700

Mozilla Persona is an open authentication system for the web that will eliminate per-site passwords. I work on the team that supports Persona, and this post will describe how we will accomplish an uber-high-availability deployment of the Persona service, with server deployments on every continent. The ultimate goals are fantastic availability, extremely low latency worldwide, and preservation of our tradition of zero-downtime updates.

2012-12-21 00:00:00 -0800

Yesterday I put together a community meeting for the Persona project, an authentication system for the web that allows site owners to "outsource the handling of passwords" and implement a highly usable login system in a couple hours.

2012-06-15 00:00:00 -0700

The Persona Login service lets web developers implement seamless login via email address with a trivial amount of code. It is written in NodeJS and is supported by Mozilla. The service is deployed as several distinct NodeJS processes. Recently we've added a new process to the service, and this short post will describe what's changed and why.
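
For a sense of what "trivial" means here, client-side integration is just a script include plus a couple of callbacks. This is a sketch based on the navigator.id observer API as it shipped; the exact callback set evolved over time:

    // assumes <script src="https://login.persona.org/include.js"> is on the page
    navigator.id.watch({
      loggedInUser: currentUserEmail,   // placeholder: the email the site believes is logged in, or null
      onlogin: function (assertion) {
        // POST the assertion to your server, verify it there, then set a session
      },
      onlogout: function () {
        // tear down the session
      }
    });

    // kick off login from a button
    document.getElementById('signin').onclick = function () {
      navigator.id.request();
    };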

2012-02-17 00:00:00 -0800

(This post was collaboratively written with Ben Adida, Austin King, Shane Tomlinson, and Dan Mills)

There are important features in BrowserID that the existing API prevents us from implementing. This post motivates and proposes a new API, building off the work of the Mozilla community and other BrowserID engineers.

2012-02-16 00:00:00 -0800

This post explores proposed BrowserID features that cannot be implemented with the current API. I'll talk about four important features that would require a change to the API, and discuss the precise ways in which it would need to change.

2012-02-06 00:00:00 -0800

One challenge in building production systems with NodeJS, something that we've dealt with in BrowserID, is finding and fixing memory leaks. In our case we've discovered and fixed memory leaks in our own code, in node.js core, and in 3rd party libraries. This post will lay bare some of what we've learned in BrowserID and present a new node.js library targeted at the problem of identifying, earlier, when your process is leaking.
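
The library itself isn't named in this excerpt, so here's only a minimal sketch of the underlying idea, using nothing but node's built-in process.memoryUsage(): sample the heap periodically and complain when it only ever grows.

    // illustrative only -- not the library the post describes
    var samples = [];

    setInterval(function () {
      samples.push(process.memoryUsage().heapUsed);
      if (samples.length > 10) samples.shift();
      // ten strictly increasing samples in a row smells like a leak
      var growing = samples.length === 10 && samples.every(function (cur, i) {
        return i === 0 || cur > samples[i - 1];
      });
      if (growing) console.warn('heapUsed grew 10 intervals straight; possible leak');
    }, 30000);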

2011-10-17 00:00:00 -0700

(This post builds on the work of Mike Hanson, Ben Adida, Dan Mills, and the mozilla community)

BrowserID is designed to be a distributed authentication system. Once fully deployed there will be no central servers required for the system to function, and the process of authenticating to a website will require minimal live communication between the user's browser, the identity provider (IdP), and the website being logged into. In order to achieve this, two things are required:

  1. Browser vendors must build native support for BrowserID into their products.
  2. IdPs must build BrowserID support to vouch for their users' ownership of issued email addresses.

This post addresses #2, proposing precisely what BrowserID support means for an IdP, and how it works.
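
For the impatient, the heart of the proposal is a support document published by the IdP. A rough sketch of the shape it ended up taking, served from /.well-known/browserid on the IdP's domain (key material elided):

    {
      "public-key": { "algorithm": "RS", "n": "...", "e": "..." },
      "authentication": "/browserid/sign_in.html",
      "provisioning": "/browserid/provision.html"
    }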

2011-08-01 00:00:00 -0700

This post will describe the release management process of the BrowserID project, which is based on gitflow. The BrowserID project has recently become a high-visibility venture, with many sites actively using the service for authentication. This overnight change from experiment to production presents us with a dilemma: the project is still in its early stages and needs to be able to rapidly iterate and evolve, but at the same time it must be stable and available for those who have started using it. Here's how we'll resolve that dilemma.
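
For the unfamiliar, the branch dance at the heart of gitflow looks roughly like this (a sketch of the model, not our exact process):

    # develop is where rapid iteration happens; master tracks what's in production
    git checkout -b release-0.9 develop            # fork a stabilization branch
    # ... only fixes land on release-0.9 while develop keeps moving ...
    git checkout master
    git merge --no-ff release-0.9 && git tag 0.9   # ship it
    git checkout develop
    git merge --no-ff release-0.9                  # fixes flow back to develop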

2011-07-26 00:00:00 -0700

BrowserID poses interesting user experience problems. The first release was sufficiently complete to provide a usable system, but we've learned a lot from community feedback and user testing, and now we can do better. This post proposes a new set of UX diagrams intended to solve several concrete UX problems; the goal is to start a discussion that will lead us to incremental improvements in BrowserID's UX.

2011-07-01 00:00:00 -0700

(a special thanks to Mike Hanson and Ben Adida for their careful review of this post)

BrowserID is a decentralized identity system that makes it possible for users to prove ownership of email addresses in a secure manner, without requiring per-site passwords. The hope is that BrowserID will ultimately become an alternative to the tradition of ad-hoc application-level authentication based on site-specific usernames and passwords. BrowserID is built by Mozilla, and implements a variant of the verified email protocol (originally proposed by Mike Hanson, and refined by Dan Mills and others).

2011-06-13 00:00:00 -0700

This post provides a tiny recipe for small scale site deployment with git. If you have a small, mostly static website that you develop using git, and you would like to streamline the publishing of the site to a server that you control, then this post is for you.
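
To spoil the recipe: a bare repository on the server plus a post-receive hook that checks out into the web root. A sketch with made-up paths:

    # on the server
    git init --bare ~/site.git
    cat > ~/site.git/hooks/post-receive <<'EOF'
    #!/bin/sh
    GIT_WORK_TREE=/var/www/site git checkout -f
    EOF
    chmod +x ~/site.git/hooks/post-receive

    # locally: publishing is just a push
    git remote add live me@server:site.git
    git push live master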

2011-06-09 00:00:00 -0700

Mozilla's Chromeless project is an experiment toward building desktop applications with web technologies. So far, it's been more of a fancy-free exploration of interesting features or applications than the serious and sometimes stodgy stuff that platforms are made of. A recent surge of community interest in the project, however, suggests that the best path forward is for the primary developers of the platform to buckle down and focus on producing a stable system upon which others can experiment, play, and ship products.

This post attempts to define a Minimum Viable Product for Chromeless: the simplest possible set of requirements for a meaningful 1.0.

2011-06-02 00:00:00 -0700

JSONSelect is a query language for JSON. With JSONSelect you can write small patterns that match against JSON documents. The language is mostly an adaptation of CSS to JSON, motivated by the belief that CSS does a fine job and is widely understood.
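
A quick taste, using the JavaScript reference implementation (API from memory; see jsonselect.org for the real docs):

    var doc = {
      name: { first: "Lloyd", last: "Hilaiel" },
      languagesSpoken: [ { lang: "Bulgarian" }, { lang: "English" } ]
    };

    // descendant selectors work just like CSS
    JSONSelect.match(".languagesSpoken .lang", doc);  // => ["Bulgarian", "English"]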

2011-04-26 00:00:00 -0700

YAJL is a little SAX-style JSON parser written in C (conforming to C99). The first iteration was put together in a couple of evening/weekend hacking sessions, and YAJL sat at version zero for about two years (2007-2009), quietly delighting a small number of folks with extreme JSON parsing needs. On April 1st 2009 YAJL was tagged 1.0.0 (apparently that was a joke, because the same day it hit version 1.0.2).

Given that two years seems to be YAJL’s natural period for major version bumps, I’m happy to announce YAJL 2, which is available now. This post will cover the changes and features in the new version.

2011-02-11 00:00:00 -0800

There has been a ton of development in the Mozilla Labs Chromeless project since the 0.1 release, and I wanted to take a moment to give a snapshot of our progress.

2010-11-22 00:00:00 -0800

In the month since we announced “Open Web Apps”, there’s been a lot of discussion around the particulars of the Mozilla proposal.

I specifically wanted to take a minute to jot down some of the proposed changes to the application manifest format from our initial design. The changes detailed here range from the drastic to the mundane, and have been contributed by my co-workers at mozilla and several community members.
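
For context, a minimal manifest of the era looked roughly like this; treat it as a reconstruction, since the field names are exactly what kept shifting as the proposals evolved:

    {
      "name": "MozillaBall",
      "description": "Exciting Open Web development action!",
      "launch_path": "/index.html",
      "icons": { "128": "/img/icon-128.png" },
      "developer": { "name": "Mozilla", "url": "https://mozilla.org" }
    }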

2010-10-25 00:00:00 -0700

Lately I’ve been collaborating with Marcio Galli on the chromeless project in Mozilla Labs, and one thing I like about the approach is that it leverages huge swaths of the jetpack platform.

2010-09-20 00:00:00 -0700

This post presents JSChannel, a little open source JavaScript library that sits atop HTML5’s cross-document messaging and provides rich messaging semantics and an ergonomic API.
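
The flavor of the thing: one side builds a channel and binds a method, the other calls it. A sketch following JSChannel's README-style API, from memory:

    // in the parent document
    var chan = Channel.build({
      window: document.getElementById("child").contentWindow,
      origin: "https://trusted.example.com",    // hypothetical child origin
      scope: "demo"
    });
    chan.call({
      method: "reverse",
      params: "hello world",
      success: function (v) { console.log(v); } // -> "dlrow olleh"
    });

    // inside the child frame
    var chan = Channel.build({ window: window.parent, origin: "*", scope: "demo" });
    chan.bind("reverse", function (trans, s) {
      return s.split("").reverse().join("");
    });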

2010-08-12 00:00:00 -0700

This post lightly explores the problem of “automatically” backing up a git repository to subversion. Why would anyone want to do this? Well, if your organization has a policy that all code must make it into subversion, but your team is interested in leveraging git in a deeper way than just by using git-svn as a sexy subversion client, then you’ll find yourself pondering the question of repository synchronization.
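
The crude version of the synchronization looks something like this (hypothetical URLs, and it glosses over the rebase and history-rewriting caveats that make the problem interesting):

    # one time: a git-svn clone acts as the bridge
    git svn clone https://svn.example.com/project bridge && cd bridge

    # on a cron-ish schedule: replay the team's git work into svn
    git pull --rebase git://git.example.com/project.git master
    git svn dcommit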

2010-08-06 00:00:00 -0700

As I spend more and more of my free time digging around in reptilian mailing lists and such, I find that I’ve begun feeling itchy. Here are four little itches that might be interesting for someone to scratch, micro-projects if you will:

2010-08-04 00:00:00 -0700
Whereby I leave Yahoo!, and join Mozilla...
2010-01-31 00:00:00 -0800

(originally posted on the Yahoo Developer Network)

In recent years, we've seen increased energy put into web extensibility platforms. These platforms let distributed developers collaborate to produce new kinds of interactive features on websites and in the web browser itself. Because these platforms frequently enable data-sharing between multiple distinct organizations, and often sit between two completely different security domains (desktop vs. web), the security and privacy issues that arise are complex and interesting. This post explores some of that complexity: both the current state of platforms that extend the web and their associated security challenges.

2010-01-13 00:00:00 -0800
![Web Security](http://lloyd.io/i/websec.png)
2010-01-07 00:00:00 -0800

A graphical pontification on how the web actually works...

2009-10-06 00:00:00 -0700

Recently I proposed orderly, an idea for a small microlanguage on top of JSONSchema — something easier to read and write.

There’s been some great feedback, which I find encouraging. In response I’ve set up orderly-json.org and started a project on github which will host the specification, the reference implementation, and all of the contents of the orderly-json.org site.

2009-10-02 00:00:00 -0700

I’ve always wanted a concise and beautiful schema language for JSON. This desire stems from a real-world need that I’ve hit repeatedly: given in-memory data that has been hydrated from a stream of JSON of questionable quality, validation is required. Currently I’m constantly performing JSON validation in an ad-hoc manner, laboriously writing boilerplate code to validate that an input JSON document is of the form that I expect.
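
To make "concise and beautiful" concrete, here's the flavor of thing I'm after; the syntax is per the orderly proposal, reconstructed from memory (a trailing ? marks an optional property):

    object {
      string name;
      string description?;
      integer age;
    };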

2009-09-25 00:00:00 -0700

There are many reasons why git-svn integration is interesting, and most of them are sociological. Here are some situations where git-svn integration can be useful:

2009-09-24 00:00:00 -0700
2009-09-23 00:00:00 -0700
[lth@clover sup]$ diff /usr/lib/ruby/gems/1.9.1/gems/lockfile-1.4.3/lib/lockfile.rb{~,}
475c475
<       buf.each do |line|
---
>       buf.split($/).each do |line|
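
(For context: Ruby 1.9 removed String#each, so the lockfile gem's line-at-a-time iteration over a string buffer now needs an explicit split on the record separator, $/.)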
2009-09-16 00:00:00 -0700

In fiddling more and more with whiz-bang HTML drag and drop (in safari 4.x and Firefox 3.5), some things caught me by surprise, primarily because I already had an idea about "how drag and drop works" that wasn't from the web world. Specifically, in BrowserPlus we invented a very simple model for a web developer to express interest in capturing desktop sourced file drags. Our model was motivated more by ease of implementation and simplicity than by deep adherence to the "precedent" set by browser vendors. At that point there wasn't all that much in the way of precedent....

2009-09-11 00:00:00 -0700

In fiddling around with HTML5 desktop sourced drag and drop, present in Safari Version 4.0.3 (6531.9), I’m faced with the interesting challenge of understanding when I can trust that a drop is really a drop – that a File is the result of user interaction. For a little context, here’s a bit of code cobbled up by Gordon Durand that’ll let us capture desktop sourced drops in the latest snow leopard:
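
Gordon's snippet isn't reproduced in this excerpt; a minimal sketch of the general technique (not his code) looks like:

    var target = document.getElementById("dropzone");
    target.addEventListener("dragover", function (e) {
      e.preventDefault();                 // without this the drop never fires
    }, false);
    target.addEventListener("drop", function (e) {
      e.preventDefault();
      var files = e.dataTransfer.files;   // Files dragged in from the desktop
      for (var i = 0; i < files.length; i++) console.log(files[i].name);
    }, false);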

2009-09-11 00:00:00 -0700

Really not much to write about: it was trivial to do, and it feels a hell of a lot faster than the burning fox. Steps?

First, grab the latest build from the chrome buildbot

Second, probably notice that the chrome binary won’t run for you… missing shared libs? Heeey, me too! Apparently we’re building with certain debug libs here. Use ldd to figger out what’s missin, and go create some symlinks:

[lth@clover chrome-linux]$ ldd chrome | egrep \\.[0-9]d
    libnss3.so.1d => /usr/lib/libnss3.so.1d (0x00007fe64a846000)
    libnssutil3.so.1d => /usr/lib/libnssutil3.so.1d (0x00007fe64a628000)
    libsmime3.so.1d => /usr/lib/libsmime3.so.1d (0x00007fe64a3fd000)
    libssl3.so.1d => /usr/lib/libssl3.so.1d (0x00007fe64a1cd000)
    libplds4.so.0d => /usr/lib/libplds4.so.0d (0x00007fe649fca000)
    libplc4.so.0d => /usr/lib/libplc4.so.0d (0x00007fe649dc6000)
    libnspr4.so.0d => /usr/lib/libnspr4.so.0d (0x00007fe649b8a000)
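
One symlink per library in that list does the trick; two shown here, and the exact paths are my guess (adjust per what ldd actually complains about):

[lth@clover chrome-linux]$ sudo ln -s /usr/lib/libnss3.so /usr/lib/libnss3.so.1d
[lth@clover chrome-linux]$ sudo ln -s /usr/lib/libnspr4.so /usr/lib/libnspr4.so.0d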
2009-09-09 00:00:00 -0700

A little make magic (leveraging redcloth and htmldoc) seems to have done the trick. Now high quality print output it ain't, but a good start!

bridges.pdf: bridges.html
    htmldoc -t pdf14 --webpage bridges.html > bridges.pdf

bridges.html: bridges.textile
    redcloth < bridges.textile > bridges.html

.PHONY: view
view: bridges.pdf
    xpdf bridges.pdf

.PHONY: clean
clean:
    @rm -f bridges.html bridges.pdf *~

2009-09-04 00:00:00 -0700

Earlier today I was impressing my wife with some unix foo by automatically swapping FIRST LAST —> LAST, FIRST formatted data while sorting and finding duplicated entries (ok, so she was only mildly impressed). The shell command looked a little like this:
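
(The command itself is cut off in this excerpt; a plausible reconstruction of the trick:)

    # swap "FIRST LAST" to "LAST, FIRST", sort, and show only the duplicated entries
    awk '{ print $2", "$1 }' names.txt | sort | uniq -d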

2009-09-03 00:00:00 -0700

Later this week I'll be moderating a chat entitled "Implementing the open web" at gluecon in my hometown, Denver.

So in preparation for this panel, the logical first step seems to be to establish a clear and concrete definition for the "open web"...

2009-09-03 00:00:00 -0700

Crazy claim, eh? I figure there's no better way to get this claim tested than by posting it as a truth!

2009-09-03 00:00:00 -0700

Recently we spent a little time optimizing some servers. These are linux machines running apache, serving static and dynamic content using php, each with a gigabit net connection. Each apache process consumes 13mb of private resident memory under load. A sample bit of “large static content” is 2mb. Assume clients consuming that content need about 20s to get it down (100kb/s or so). That means we need to be spoon feeding about 2000 simultaneously connected clients in order to saturate the gigabit connection.

So turn up MaxClients, right? 13mb * 2000 (and a recompile, btw) works out to about 26gb of RAM. uh. that’s not gonna work.

So there are lots of ways to solve this problem, but before we start thinking about that, how would we simulate such a load so that we can validate the existence of this bottleneck now, and its resolution once we fix it?

siege is a great little bit of software that can simulate load:

siege-2.69/src/siege -c 200 -f url.txt
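# -c sets the number of concurrent simulated users; -f names a file of URLs to hit
# ramp -c toward 2000 to reproduce the MaxClients ceiling described above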
2009-09-03 00:00:00 -0700

The expiration of my account at hub.org, and my discovery of mosso & slicehost, have prompted me to move all my personal shit around… Along with that move I figured I might as well disband my efforts at a ground-up implementation of every piece of technology required to run a site, and just throw apache, php, and a little wordpress at the problem (sorry erlang & yaws, I still love you. don’t hate me ruby & lighttpd, you guys are really cute!). This is a common theme in my life: getting stuck in the interesting problems that pop up while trying to solve a problem…

anyhow, welcome! we will shortly return to your previously scheduled programming (woteva that woz)…

lloyd

2009-09-03 00:00:00 -0700

A set of questions that can be used as a jumping-off point for discussions and learning about computers, the internet, and programming.

Theory Questions:

  1. What is assembly language?
  2. What’s the difference between a compiled and interpreted programming language?
  3. At a very high level, how do computers communicate over a network?
  4. What is the “emacs vs vi” discussion? What are emacs and vi?
  5. What is SSH? What is telnet? Why is ssh better than telnet?
2008-05-15 00:00:00 -0700

Seeing all this action in ruby trunk, combined with what I’ve read ’round the net, has piqued my interest in 1.9 performance differences.

Given the set of contributed benchmarks that I used when developing the initial patch to improve the reclamation and decrease memory usage of ruby, I did some comparisons of ruby 1.9 vs ruby 1.8.6, and of ruby 1.9 vs a patched ruby 1.8.6.

In short, looking at this data leads me to some preliminary conclusions:

  • 1.9 is decidedly “faster” than 1.8.6, especially when runtimes are longer or yaml is involved.
  • 1.9 uses slightly less memory overall, though there is considerable room for improvement in 1.9’s memory reclamation.
2008-05-01 00:00:00 -0700

w00t. An email from matz, and a little spelunking in the ruby subversion repository, show that there’s some tinkering going on in ruby garbage collection land. Here are the interesting change logs:

r15674 | matz | 2008-03-03 01:27:43 -0700 (Mon, 03 Mar 2008) | 5 lines

* gc.c (add_heap): sort heaps array in ascending order to use
  binary search.

* gc.c (is_pointer_to_heap): use binary search to identify object
  in heaps.  works better when number of heap segments grow big.
2008-02-07 00:00:00 -0800

This page details some changes to the ruby garbage collector which seem to afford a 25% reduction in maximum heap memory usage, and to nearly double the amount of heap space ruby is able to reclaim. This comes at the cost of a 2% performance hit. More to come, stay tuned.

2007-12-22 00:00:00 -0800

Here’s the Problem:

http://lists.apple.com/archives/x11-users/2007/Oct/msg00065.html

Here’s a proposed “fix”:

http://aaroniba.net/articles/x11-leopard.html

If they hadn’t added RPATH support and DTrace, and generally stuck to judicious changes, I’d be throwing stones. As it stands, this is extremely annoying, but tolerable. Looking forward to the fix…

-lloyd

2007-04-22 00:00:00 -0700

Ruby’s GC & heap implementation uses a lot of memory. It is built around the idea of “heaps”: chunks of memory where ruby objects are stored. Each heap consists of a number of slots, and slots are between 20 and 40 bytes, depending on sizeof(long). When ruby runs out of heap space, it first does a GC run to try to free something up, and then allocates a new heap that is 1.8 times larger than the last. Every time a GC run happens, the entire heap is written to turn off mark bits, which are stored in the heap itself. Then ruby runs through top-level objects, marking them and all their descendents, and sweeps away anything that’s not marked. Because of the way ruby works, objects may never be moved around in heaps: from the time they’re allocated to the time they’re freed, they stay at the same memory address.

2006-08-31 00:00:00 -0700

I cannot live without X11 emacs! It doesn’t build from macports right now. As far as I can tell, the emacs that apple ships with leopard is broken; at least for me, after the upgrade I get:

[lth@tumno ~] $ /usr/bin/emacs.broken
Fatal malloc_jumpstart() error