Interview with Arturo "Buanzo" Busleiman, developer of Enigform

In most industries, innovation comes from big companies that invest large amounts of money in equipment and research. The IT industry is different: the only real investment is a PC—and the copious amounts of time needed for study and research. (Without free software it could have been a very different story: we could be living in a world where you couldn’t program without forking out several thousand dollars just for a compiler. Does anybody remember how much the first version of Visual C++ cost?)

In computers, the most important leaps forward are often made by single (outstanding) individuals. I’ve had a chance to talk to Arturo "Buanzo" Busleiman, who wrote Enigform. If Enigform becomes a standard, it could change the way everybody logs onto their internet banking sites and more. He’s the best person to talk about Enigform... so, here he is.

TM: Arturo, please tell us about yourself!

I’m a 25-year-old Argentinian GNU/Linux user, admin and developer who really loves information security, programming and guitar playing (I play the guitar and sing in a punk-rock band, check it out people!). I’ve been “Linuxating” since late 1994, and GNU/Linux and the FLOSS movement helped me become what I am today: a full-grown geek. (Yes, I do have a wife, and children, too.)

Arturo Busleiman, developer of Enigform

TM: You developed Enigform... can you give us a very simple description of what it does?

Enigform is a Mozilla Firefox extension that implements OpenPGP signing (and in the future, encrypting) of HTTP requests, allowing web servers (or proxies, for example) to authenticate the identity and integrity of the request. It uses GnuPG, and works on any operating system where Mozilla and GnuPG can run.

TM: What about a more detailed/technical description?

GnuPG allows its users to sign and encrypt files, but it’s mostly used for email. If I send you a signed email, and you have my public key, then you can verify that I sent it. I can also encrypt it so that you are the only one who can decrypt and read it. Both operations can be combined, of course. That’s quite cool, because signing also lets you verify the contents of the message: email forgery simply doesn’t work here.

Imagine all those benefits for web browsing! No Phishing, no session hijacking, no man-in-the-middle attacks! Alternatives to SSL. A safer browsing experience. Better eCommerce. Imagine Paypal or Google Checkout or Home Banking where both the user and the server can identify each other with the same level of confidence we can expect from PGP Signed/Encrypted email...

Of course, server-side support is required. That’s why I had to develop mod_auth_openpgp, an Apache module that you can enable for a server or virtual host (per-directory support is on its way).
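
As a rough sketch of what that could look like, enabling the module for a virtual host might involve directives like the following. The directive names here are assumptions for illustration only; the interview doesn't document mod_auth_openpgp's actual configuration syntax.

```apache
# Hypothetical directives -- illustrative only, not from the module's docs.
LoadModule auth_openpgp_module modules/mod_auth_openpgp.so

<VirtualHost *:80>
    ServerName secure.example.org
    AuthOpenPGP On                                # require signed requests
    AuthOpenPGPKeyring /etc/apache2/pubring.gpg   # trusted public keys
</VirtualHost>
```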

Enigform does not break HTTP in any way, it just adds a set of request headers. Servers that do not support the OpenPGP extensions will not benefit from them, but won’t be affected either.
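
As a purely illustrative sketch, a signed request might carry headers along these lines; the X-OpenPGP-* names and values are assumptions, since the interview doesn't give the draft's actual header names:

```http
POST /checkout HTTP/1.1
Host: shop.example.org
Content-Type: application/x-www-form-urlencoded
X-OpenPGP-Sig: iQEcBAABAgAGBQJ...      (ASCII-armored detached signature)
X-OpenPGP-Version: GnuPG v1.4.7
X-OpenPGP-Digest-Algo: SHA1

item=42&qty=1
```

A server without OpenPGP support would simply ignore the extra headers, which is why the scheme doesn't break plain HTTP.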

TM: So, it needs an Apache module to actually work. What about other web servers? And on the client side, what about IE, Opera, etc.?

Regarding Microsoft’s IIS... well, I wouldn’t code for it myself, but could provide support to other programmers. However, I’m willing to port mod_auth_openpgp to other open source, freedom-based httpd implementations. In any case, it just depends on GPGME (a library of GnuPG functions that is quite easy to use), so should be an easy task.

Opera has a plugin API, and I quite like it because it runs on many mobile platforms, which is a great thing. I’d like to support it just because of that, even though it’s not open source (yet!).

IE 7 now has a plugin API. I guess someone will port Enigform... In any case, I don’t think IE (and Windows as we know it) will live much longer. I’ve always believed that if Microsoft went GPL, they would really benefit. Who knows.

TM: Who will benefit from this module? Can you give us a couple of use cases?

Both users and web developers/admins/etc. Signing HTTP requests (encryption is still not supported) allows web programmers, webmasters and administrators to authorize access to private content, or simply to identify users. I like to think that phishing (as no username and password are required!) and man-in-the-middle attacks can be rendered obsolete by this. Some people have suggested that it could also help in scenarios where HTTP traffic must be encrypted. SSL is used for this, but for the whole certificates process to work you need one public IP per virtual host. As Enigform works on the application layer, not at socket layer, it can even work behind accelerators, reverse proxies, etc.

TM: So, people would have only one set of keys (private/public), and then provide their public key to their bank, their favourite shopping site, and so on (assuming that they support Enigform). Which also means that they will only have one password to remember—the password to their private key. Is this correct?

You got it right! Enigform can also make use of the gpg-agent, so it is not necessary to input the passphrase for every request. Enigform can also store the passphrase in memory, but using the gpg-agent is far better in terms of security.

TM: Enigform will basically confirm that the key belongs to the right user. However, the OpenPGP architecture doesn’t have a central authority to have a list of “confirmed” identities. Is this a problem? If I were (for example) a bank, and wanted to use Enigform, what would I have to do in order to get my customers to use it?

First of all, “trust” comes in a different flavor under the OpenPGP mechanism. I can sign other people’s public keys, making them more trustworthy. And there are even documents on “How to organize a pgp key-signing party”. Yes, it’s completely different, but I think that’s the good thing: it is an alternative. Not just a different implementation, but a whole different concept from SSL, and one that could actually coexist with it.

For example, banks are requesting that their users type their Home Banking password on a “virtual keyboard” that appears on-screen. This has already been hacked, for different browsers and banks. With Enigform, banks could just request users to submit their public key to a keyserver, or send it via email to some special email address, and then tell them to type in their pgp keyid at the nearest ATM. How does this differ from current procedures? Instead of setting the Home Banking password at the ATM, you specify your OpenPGP Key ID. No password is necessary. Not even a username. Of course, an attacker would have to steal the user’s private keyring AND the passphrase, but once the user’s computer is compromised... no security mechanism applies.

TM: I am a little concerned about mobile devices, which might be too small to actually run OpenPGP. How would you address that? (If every bank used it, I wouldn’t be able to do internet banking with my tiny tablet!)

Well, GnuPG is currently not working with Windows CE, for example, but Palm has announced the switch to a Linux-based OS... thus GnuPG will potentially be supported. Mobile is not my field (... yet! I’m from Argentina, I’ve not attended a security conference yet because of money issues, so... Mobile? What’s that?!)

TM: Does Enigform actually need to deal with encryption as well? I mean, you could use Enigform to verify the user’s identity, and transfer information through HTTPS, right?

Of course. But when HTTPS is not an option, or when you only want certain requests to be encrypted, thus avoiding the bandwidth overhead, then request encryption is clearly an interesting alternative. Additionally, the benefit of avoiding the high cost of SSL certificates is definitely nice. Well, we could also talk about the cryptographic benefits of using the algorithms and keysizes an OpenPGP implementation allows, but... :)

TM: Is there anything equivalent in the free software world? What about the proprietary world?

The only thing remotely similar that I’ve found so far (and I’ve really googled for this) is a CCI PGP extension documented here. I believe it’s from 1995...

TM: I understand you want it to become an accepted standard. How is the approval process going?

Some guys from the IETF OpenPGP Working Group are helping me with some procedural details, but I expect to have it submitted to the IETF by July this year (2007). Hopefully, it can become an RFC. That would probably make Enigform and other plugins obsolete, because the browser itself could/should support it. I’d love to focus on the server-side, really.

TM: What are your future plans about Enigform/mod_auth_openpgp?

Well, I want to enhance Enigform itself with some per-domain, or regex-based, rules like “always sign for this domain using key 0xdeadb33f”, provide some key-exchange negotiation mechanism, and support it in reverse: the browsing user can verify the remote web server. If verification works both ways, then it would simply be fantastic. Absolutely no more phishing! Of course, adding request encryption is pretty high on my TODO list.

TM: For this to work, it will need widespread adoption. Is it easy for the user to actually set their public key? Does the plugin help the users create a keypair? And, if/when this makes it into normal browsers, do you think it will be easy enough to use for common users?

Well, Tony, once you have configured GnuPG, and once your keypair is ready, Enigform is ready to be used. If no specific key-id is chosen in Enigform’s Preferences Dialog, then the default one is used. It doesn’t help the user create a keypair in any way, but then Enigform 1.0.0 has not been released yet either! My TODO list has a “gnupg wizard” item, but it’s not high on my priority list. As you say, for this to work, it will need widespread adoption. If I don’t focus fully on the HTTP-PGP process, then it will become an unstable platform. I’d rather define a clear workflow (i.e. the Internet Draft), and then implement better user interface elements. Of course, if anyone is willing to help, then some user-friendly features could be added ASAP. If it ever makes it into normal browsers, then each one will provide its own OpenPGP implementation and/or wizards, and all of that would be presented to the user in the format and design they’re used to seeing in those browsers. I trust the design and development teams at Mozilla and Opera! :)

TM: Thank you so much for your answers, Arturo. Best of luck with your project and with its acceptance as a standard!



Submitted by undefined

HTTP sessions can right now be encrypted and authenticated using TLS (or SSL), but nobody i know does it (here in the US). and i'm not talking about authenticating the server, but the client.

SSL is used for this, but for the whole certificates process to work you need one public IP per virtual host.

TLS requires a unique IP address? no; a unique endpoint (IP and port) is sufficient. isn't specifying something other than port 80 too difficult for users? it's easier than generating and properly handling a public-private key pair. and people just go to their bank's home page and find the redirecting form (bad!) or link for login to the "secure" web site, so why can't that be replaced with a link that has a port number in the url? and for most businesses willing to go through the expense of client-side auth (TLS, PGP, etc), a dedicated IP address is nothing.

Enigform works on the application layer, not at socket layer, it can even work behind accelerators, reverse proxies, etc.

yes, because SSL does encryption at the transport layer, the application layer is obscured, so there's less flexibility (you can only play games at the transport layer, not the application layer). but usually the main concern is load balancing, which is possible if you distribute connections based on TLS sessions and share the same cert across all web servers. it may even be possible in practice to just distribute TCP streams, ignoring TLS sessions.

the high cost of SSL certificates

yes, if you want 3rd party verification, you need to use a recognized certificate authority, but there's nothing to stop a bank from becoming their own authority. and certificate authorities are not even necessary if the trustworthiness of the certificate is established out-of-band (eg in person at a bank branch office, by physical mail, etc). certificate authorities exist only so that by trusting a single entity you can implicitly trust all issued certificates (without verifying each individual certificate), but as long as my bank trusts my cert (by verifying it ahead of time), it accomplishes the same. and PGP's partial/accumulative trust might work for social networks, but i don't see it as beneficial/necessary for banks that want me to sign on the dotted line, not 2 or 3 people they trust to know me.

we could also talk about the cryptographic benefits of using the algorithms and keysizes an OpenPGP implementation allows

please elaborate on what cryptographic benefits PGP has over TLS, as GPG and OpenSSL, common implementations of each standard, support the widely used RSA and AES algorithms in 2048 & 256 bit keysizes, respectively. TLS is currently limited to MD5 & SHA1, the two most popular hashes, but both TLS & OpenPGP (RFC 2440) lack SHA2 (though draft RFCs addressing SHA2 exist for both). yes, OpenPGP allows for less popular algorithms, but i don't see that as a big cryptographic advantage (as "less popular" means "less analyzed").

and there is a draft for using OpenPGP keys for TLS but who knows if it will go anywhere.

HTTP-PGP would allow for more granular use of encryption/authentication in HTTP than TLS, but securing the entire transport seems good enough. MIME supports the fine granularity of mixing OpenPGP and plain texts in the same email message, but i haven't seen an email application do that; instead they just apply OpenPGP to the entire email, probably because securing the entire email is "good enough".

is HTTP-PGP complementary and/or supplementary to TLS, and are choices important? yes. has TLS been widely adopted for client-side auth, and does HTTP-PGP have any more chance of being used? no.

i just don't see any attribute of HTTP-PGP being so much better than TLS that it sees more uptake than TLS.

Submitted by Anonymous visitor (not verified)

well, from the interview, i'd say it's just an alternative to ssl/tls but only in some aspects and in some environments.

having alternatives is good, and one using OpenPGP looks just fine. in any case, installing enigform (yeah, i tried it; why comment if you haven't tried it, anyway?) is painless, and configuring mod_auth_openpgp on apache is quite easy, too.

i think the best benefit is that it allows you to avoid the concept of "login". for the sake of testing, i wrote a simple web site and added the public keys of some test users to the keyring at the server. they didn't have to log in, just get to the site, and they were automagically logged on, identified by their keyid, fingerprint or email address.
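
The login-free flow the commenter describes can be sketched in a few lines of Python. Here `verify_request()` is a hypothetical stub standing in for mod_auth_openpgp's signature check; the header name and fingerprint are made up for illustration.

```python
# Sketch of login-free identification by OpenPGP key fingerprint.
# verify_request() is a stub: a real deployment would invoke GnuPG to
# check the request signature and return the signer's fingerprint.

# Server-side mapping of known key fingerprints to users (test data).
KNOWN_KEYS = {
    "DEADB33FDEADB33FDEADB33FDEADB33FDEADB33F": "alice",
}

def verify_request(headers):
    """Hypothetical verifier: returns the signer's fingerprint if the
    request signature checks out, else None."""
    return headers.get("X-OpenPGP-Fingerprint")  # stub for illustration

def identify_user(headers):
    """Map a verified key fingerprint to a user; no login form needed."""
    fingerprint = verify_request(headers)
    if fingerprint is None:
        return None
    return KNOWN_KEYS.get(fingerprint)
```

With this in place, a request whose signature verifies is attributed to a user immediately, and an unsigned request simply gets no identity.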

yeah, i liked it. i'm waiting for mod_auth_openpgp to grow some more (having auto public-key importing would be cool, and a management interface via a sethandler!).

Submitted by Anonymous visitor (not verified)

GnuPG provides for "digital signatures", as well as other things. Digital signatures provide for "authentication" and "integrity". Leaving aside discussions of SSL/TLS/GnuPG encryption and "single-sign-on", another use for GnuPG is the production of a digital signature for any given "message". This is one function that TLS/SSL don't make use of today, as far as I know.

The possible ability to apply and recover message authentication and integrity would itself be a big boon in many applications, even without considering "web of trust" issues with GnuPG/PGP "keys".

Submitted by undefined

you don't even have to get into the technical details of RFC 2246 to know that TLS provides authentication and integrity checking.

from the abstract:
The protocol allows client/server applications to communicate in a way that is designed to prevent eavesdropping, tampering, or message forgery.

from the introduction:
The primary goal of the TLS Protocol is to provide privacy and data integrity between two communicating applications.

continue reading the RFC to learn the details, but to summarize:

  • "TLS Record Protocol" provides the integrity checking by way of MACs (MD5 or SHA)
  • "TLS Handshake Protocol" provides authentication by way of public keys (RSA or DSS)

TLS even has a "null" cipher which should approximate OpenPGP signed cleartext. to see it in action:

  1. openssl s_server -cert server.pem -accept 65000 -cipher eNULL
  2. openssl s_client -connect localhost:65000 -cipher eNULL
  3. sudo wireshark

note: you'll want to instruct wireshark to decode the transport as SSL as port 65000 is not associated with SSL/TLS.

finding a "real world" implementation might be difficult (or at least configuring it) because i've never seen TLS used without encryption. the default OpenSSL cipher list excludes NULL. and most (all?) deployments exclude the NULL cipher because there have been downgrade attacks where a man-in-the-middle could force a weak cipher so the stream could be easily decrypted.

and the common usage of OpenPGP cleartext signatures is for a message going to an unknown recipient (or a recipient with an unknown key), usually for wide distribution (mailing lists, software downloads, etc). that use case is impossible for TLS because the communication is real-time (where the key can be learned and validated in real-time using certificate authorities) with a single endpoint.

overall OpenPGP and TLS use the same algorithms (AES, 3DES, MD5, SHA, RSA, DSS/DSA) for the same purpose (encryption, integrity, authentication). though there are slight protocol/implementation differences, i don't believe any are significant enough to garner Enigform more widespread adoption than TLS for client-side authentication.

(I want to be proven wrong by seeing widespread use of encryption and authentication, whether in the form of Enigform or TLS.)

Submitted by pivo

As you state earlier, I think the advantage of this approach is its granularity. My server would encrypt only the content that needs to be encrypted and the rest would be signed and thus at the same time unforgeable and cachable both on the client and any intermediate proxies. This is not currently possible as far as I know. Otherwise, please enlighten me.

Submitted by undefined

based on this article's description of enigform and a quick perusal of Busleiman's freshmeat article, i gather that enigform can encrypt a single request or response (in theory), where TLS always encrypts both (at least in practice, and probably also in theory, unless mid-stream renegotiation exists as in some other protocols).

here's a supporting quote from this article:
Enigform does not break HTTP in any way, it just adds a set of request headers.

and from the freshmeat article:
There was no more HTTP POST body tampering, just a set of HTTP headers added to each request: The signature itself, a GnuPG version string, and a Digest Algorithm string.

i didn't see enigform encryption elaborated on, but i presume it works in a similar fashion to signing: encryption metadata in headers with the body encrypted, possibly as a new content type (requiring a new Content-Type header).
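
A minimal sketch of how such a header set might be assembled, assuming hypothetical X-OpenPGP-* header names and a placeholder version string (the freshmeat quote names the pieces, the signature, a GnuPG version string and a digest algorithm, but not the actual wire format):

```python
import hashlib

def build_signed_headers(body, armored_signature):
    """Assemble illustrative signed-request headers: the signature, a
    GnuPG version string, and the digest algorithm used over the body.
    Header names and the version string are assumptions, not
    Enigform's actual format."""
    digest = hashlib.sha256(body).hexdigest()
    return {
        "X-OpenPGP-Sig": armored_signature,
        "X-OpenPGP-Version": "GnuPG v1.4.7",   # placeholder version string
        "X-OpenPGP-Digest-Algo": "SHA256",
        "X-OpenPGP-Digest": digest,
    }

# Usage: the untouched POST body travels as-is; only headers are added.
headers = build_signed_headers(b"user=alice&item=42",
                               "iQEcBAABAgAGBQJ...")  # truncated signature
```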

technically, TLS encrypts per TCP connection, not HTTP request/reply (as it operates on the transport layer, not the application layer), but you can send all encrypted requests/replies over one TCP connection, and unencrypted requests/replies over another TCP connection (eg web page contents over https, but generic images over http). your password could be submitted over an encrypted TCP connection where the server's response would redirect you to an unencrypted web site. (logging in with a password wouldn't even be necessary with client-side certs, but you get the idea.)

enigform can conceivably encrypt only half of a request/reply pair (user sends password in encrypted request while server responds with unencrypted reply), where with TLS both request & reply are either encrypted or unencrypted as the reply must be sent over the same TCP connection as the request. so there's a little bit more flexibility with enigform.

currently web browsers notify the user about web pages with mixed encrypted/unencrypted content (TLS encrypted & unencrypted TCP connections on the same page), but there's no technical reason that warning can't be removed if mixed content becomes the accepted practice (because what i want to be warned about is unencrypted data submitted from an encrypted page, as a user would generally assume an encrypted page submits encrypted data).

i wonder what variety of attacks are possible against enigform, as it appears enigform only safeguards HTTP contents, but not headers (at least in the example of signatures in the freshmeat article, but i suppose encryption works the same). can cookies not be authenticated or encrypted using enigform? and are urls visible, compared to TLS where the entire connection (headers & body) is encrypted, protecting even the url requested (ie information disclosure)?

again, please don't interpret my critique as disapproval of enigform, except in the fact that enigform is so similar to TLS from a practical standpoint that i see no reason why it should succeed where TLS has failed (and failed not for technical reasons necessarily, but maybe because everybody thinks passwords are sufficient for client-side auth).

Submitted by pivo

I mostly agree with you.

But currently there is no http scheme that allows a server to sign the content and make it cachable by proxies. If done right, that'd be a killer feature. As you also note, some of the headers would have to be covered by the signature as well.

The creator of Enigform has this to say about the issue (from freshmeat):
"As you saw, I decided to focus on Identity and Data authentication of client at server, but of course I'm also thinking about the same scheme in reverse."

And the creators of coralcdn also saw a need for server signatures, as I found this: "To handle this issue, we've written an apache module that servers can use to sign content to ensure content integrity and freshness, to be verified both by proxies and possibly by client browser extensions."

But so far I haven't got any response to my request for clarification on their mailing list.

Submitted by undefined

i totally missed that in your first post. yes, server-signed content would be cachable (in theory; i don't know how caching proxies, like squid, handle headers and if they are maintained with the associated cached content).

my guess is that currently headers are not signed nor encrypted because of the difficulty in implementing it in a standards-compatible way (ie assuming the signature headers apply to all headers that follow them, can it be guaranteed that a proxy server won't add any of its own headers below the signature headers or reorder the headers, ruining the signature).

and i can see the benefit for public web servers that want to serve signed content (linux distros: security bulletins, installation instructions, etc) but also need it to be cached for web site performance.

but when i want encryption, i want everything encrypted (no information disclosure even through headers, especially the request header which would be difficult to encrypt and not break standards).

so i can see how enigform would excel at authentication (or minimal encryption use like only submitting a password on a login form) while ssl would be reserved for encryption (banking web site).

Author information

Tony is the founder and the Editor In Chief of Free Software Magazine