Building a secure web protocol


Today almost all web surfing is done entirely unprotected by any encryption. Anybody on the same wireless LAN as you can see your web surfing, as can anybody who taps internet connections anywhere along the route. While the "https" protocol exists (which uses the system known as TLS, formerly SSL) to offer both encrypted web pages and authentication of who you're really talking to, TLS traffic accounts for a tiny fraction of all web traffic.

This is not how it should be. Broadcasting all you do on the web, including in many cases passwords and in almost all cases login cookies, is highly insecure and doesn't need to be. Doing everything "in the clear" for all to see should be the rare exception, not the rule. Today HTTPS is relegated mostly to things like banking sessions, and (misleadingly) to login screens.

There are many reasons for this failure. Here are some plans to mitigate them and get wide deployment.

Key Issues and Recommendations:

  1. Work must be done to make TLS or another secure protocol more lightweight, with fewer round trips before a web GET is done.
  2. Zero User Interface provisioning of certificates for new domains and IPs on web servers.
  3. Support for use of such automatic certificates by browsers without warnings in normal situations, but with a neutral display of the security level.
  4. Configuration by sites of which warnings should be sounded about security risks concerning their web site.
  5. General, widespread efforts to aggressively minimize false warnings, reducing users' desire to disable them.
  6. Capacity for security warnings on unplanned security downgrades on sites and pages, and on the use of existing passwords on unsecured pages.
  7. Support for secure cookies not accessible by lower levels of security.

The ultimate goal: A web where most traffic is sent encrypted, with decent but not perfect trust that the traffic is secure, particularly after a long history of interaction with a site. In addition, browsers that by default only warn when it is highly probable that an attack is going on.

Technical Issues

TLS turns out to be expensive, certainly compared to the ordinary web. The computational cost of encryption is no longer significant thanks to faster computers, but it was high when the web was built, so people got the idea that TLS should be rare. Unfortunately, TLS is still expensive in terms of the "round trip" dialogs that must take place to start it up. This means that users may see a slight delay on their first hit to a web page in a while. That's OK here and there, but not pleasant if the whole web is like this all the time. It also turns out to be really burdensome on wireless networks with intermittent packet loss.

Reducing TLS's roundtrip need requires major technical changes, and in some cases "layer violations" which disturb engineers, sometimes for good reason. However, some plans worth considering include:

  • Clients and servers can cache the session encryption keys from past TLS sessions with a user for a reasonable period, like a day. This would mean that whatever cost the first TLS session had, it is not repeated for a day. A client should be able to open the TLS port and begin their conversation with "I will be using key #12345 from earlier today" and proceed immediately with their encrypted web GET or other traffic.
  • Clients should be able to receive certificates for sites in DNS requests, and use them for a shortened encryption setup to the site. If the client sees sufficient encryption based on the certificate, there will be no negotiation. The session would start with "I am going to use your public key X, and here's a session key encrypted with it, and here's my GET encrypted with that."
  • To facilitate the above, the use of elliptic curve systems with vastly shorter keys should be considered, as should the use of non-X.509 certificates, since X.509 certificates are huge and bulky.
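The first idea, cached session resumption, can be sketched with a toy client and server. The class names, message shapes, and round-trip counts below are illustrative assumptions, not any real TLS API:

```python
import secrets


class Server:
    """Toy server that caches session keys so clients can resume later."""

    def __init__(self):
        self.sessions = {}  # session_id -> key, kept for ~a day

    def full_handshake(self):
        # Simulates the expensive multi-roundtrip key negotiation.
        sid = secrets.token_hex(8)
        key = secrets.token_bytes(32)
        self.sessions[sid] = key
        return sid, key

    def resume(self, sid):
        # Client said "I will be using key #<sid> from earlier today".
        return self.sessions.get(sid)


class Client:
    def __init__(self):
        self.cache = {}  # host -> (session_id, key)

    def connect(self, host, server):
        """Returns (session_key, round_trips_used)."""
        if host in self.cache:
            sid, key = self.cache[host]
            if server.resume(sid) == key:
                # Resumption claim and encrypted GET share one round trip.
                return key, 1
        sid, key = server.full_handshake()  # several round trips
        self.cache[host] = (sid, key)
        return key, 3
```

On the second visit within the cache lifetime, the negotiation is skipped entirely and the encrypted GET rides along with the resumption claim.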

User Interface Issues

The key to success in an encryption system is something I call ZUI -- Zero User Interface. The encryption must "just work" in as invisible a manner as possible. The only real success stories for encryption, Skype and SSH, work almost transparently to the user, and in SSH's case, even to the site administrator!

Sometimes making a usable system involves some security compromises, which bother many security designers. They are so afraid of any compromise that the perfect becomes the enemy of the good. The attempt to build perfect systems has left us with a world where encryption is rarely used because the UI gets in the way. We must accept that a system with a good UI but some minimal compromises will provide more security to the average user than a hard-to-use, and thus rarely used, no-compromise system.

To modify a famous aphorism, "Everything should be made as secure as can be done usably, but no more secure."

Some of these security compromises are discussed in the security section.

ZUI web server install

It must be possible for a web server system, when it is being installed, to do everything to set up encrypted operation with zero action by even the sysadmin. We often feel little worry about putting configuration work on sysadmins of web sites, as they are supposedly wizard users, but the reality is that few sites have TLS as a result.

It must be possible for the site, without any action by the sysadmin, to generate an appropriate private key for itself, and an appropriate certificate for that key. That may be a self-signed certificate but it is better if it can be authenticated slightly more.

If the configuration of the web server changes (i.e. new domains or IPs are added) it must be possible for the system to automatically adapt. Indeed, it may not do much of its key work until domains are defined.

Free, machine generatable certificates

To avoid the issues of self-signed certificates, which provide only a basic level (though much better than zero) of security, it should be possible for any machine connected to the internet to get some level of certificate automatically, and at zero cost.

Certificates today require a manual process and cost money. This is because of monopolistic practices, and because we attempt to have all certificates certify too much. It is not bad to be able to get and pay for certificates that certify more (such as the real names of companies) but this should not be the minimum. There should be a minimum level of certification which is designed so that it can be free and automatic. In this case, any operating web server can request a certificate for its key with no human involved.

Things that can be certified in this way include one-time (or long term) control of the web server at a given IP address and/or a given domain. With additional tools, and some minimal UI, root control of a given IP at a given time can also be proven.

Meaningful security states and minimal warnings

If a security system presents false or uninteresting warnings to the user with any noticeable frequency, the user will turn off the warnings or stop using the system. While it is essential that the system be able to warn the user about security issues, it is also essential these warnings be used only for real issues. Warnings on "maybe" issues must be minimized, and represent poor design.

Today, browsers display TLS security with an icon, typically a closed lock, when a page is secured. They may show nothing, or an open lock, when a page is wide open. Furthermore, the browser may show certified data (such as the certified real-world name of the entity you are talking to) in the title bar. The use of special colours may also convey this status, and of course the presence of https in the URL indicates it.

There are many states that can be shown to a user:

A well certified and identified site.

Here, the display of the certified name in the URL or title bar makes sense.

An automatically certified site.

Eventually this should become the most common condition and as such it might well get no special display. For example, it would have neither a locked padlock nor an unlocked one.

A self-signed site.

This should be another common state, and the display of it, if any, should be subtle. However, use of a self-signed certificate should only be cause for a slight UI change, and not a warning, unless the user has asked for this. The self-signed key should be added to a key ring without user intervention in normal circumstances, to detect changes in the key or downgrades in security.

An unsecured site.

Eventually this should become the exception, and merit a more prominent unlocked padlock or mild visual negative clue, such as the use of the red colour in the toolbar.
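The silent key ring proposed above for self-signed sites could work in the trust-on-first-use style of SSH. A minimal sketch, where the method names and return values are assumptions:

```python
class KeyRing:
    """Remembers the key first seen for each site, SSH-style."""

    def __init__(self):
        self.known = {}  # host -> key fingerprint

    def check(self, host, fingerprint):
        """Returns 'first-use', 'ok', or 'changed'.

        'changed' is the interesting case: a possible downgrade or attack.
        """
        if host not in self.known:
            self.known[host] = fingerprint  # pin silently, no user dialog
            return "first-use"
        return "ok" if self.known[host] == fingerprint else "changed"
```

Only the "changed" result would ever surface to the user, and even then only subject to the site's warning policy discussed below.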


No common situation should generate a warning unless it is actually an attack. That means there should not, by default, be warnings for the ordinary use of a self-signed or automatically certified site. It also means there can't yet be a warning about the use of an unsecured site, though that is actually the thing most in need of caution, at least if the site performs any authentication through passwords or cookies.

In general, for every warning considered, it should be studied if the condition might be common, or might represent a situation that is a deliberate choice by a site. This should affect whether disabling the warning is global, or just for a site. Of course, if the thing warned about is certainly an attack, it should be strongly discouraged to disable it, but history shows judgement has been bad on just how certain we are that something is an attack. It may seem that a condition is very rare and worthy of a warning, but this can still mean that 99.9% of the times the warning is given it is false, making the warning UN-useful.

It may often be the case that changes are more appropriate for red flags than ordinary static conditions. Even then it must be considered what could legitimately cause the changes.

Before a browser generates a warning about a site, it should attempt a special fetch (probably a magic URL) from the site to receive a file encoding certified security updates, including statements about what warnings are desired and not desired, and certified declarations about changes in certificates and security levels.

Notice of changes to these security policies could be provided in HTTP headers of pages returned by the web server. This header would indicate the date of last change to the security policy page so that it can be fetched if the cached copy is not present. This avoids having to put security policy (which could be large) in each web return. Some security policy may have to go into every fetch, such as that designed to prevent cross site scripting. The Mozilla CSP proposal is aimed at this, and the two approaches can be mixed, including providing the URL of the warning policy in the CSP. This paper also proposes a unified security policy mechanism for sites and browsers.
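One way such a header might work, sketched in Python. The header name Security-Policy-Date and the cache shape are illustrative assumptions, not an existing standard:

```python
def policy_needs_refetch(headers, cache, host):
    """Decide whether to fetch the site's security policy page.

    headers: response headers from an ordinary page fetch
    cache:   host -> ISO date of the policy copy we already hold
    """
    advertised = headers.get("Security-Policy-Date")
    if advertised is None:
        return False  # site publishes no policy page
    cached = cache.get(host)
    # ISO dates compare correctly as strings; refetch on first sight
    # or whenever the site advertises a newer policy.
    return cached is None or cached < advertised
```

The point of the date header is that the (possibly large) policy file is fetched only when it has actually changed, not on every page load.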

Unexpected security changes and downgrades are more worrisome than planned low security

If a user goes to a site which has decided to use a self-signed certificate, this should not be cause for alarm, not when having no certificate at all causes no alarm. However, if a site one day presents a full certificate and the next day it presents a self-signed one, that is much more suspicious. If a site has a certificate one day and the next day is completely unsecured, that's a much greater potential concern. It is not the specific behaviour which is the issue, it's the change, particularly the change to lower security, which is just what an attacker might want.

The main barrier to this sort of warning today is that many sites, when you go to them via https, will immediately redirect you to their http version. Sites that want you to use https start you at the http page and redirect you up to the https version. And almost no sites have an https version with no http version. It is thus not yet possible to do a meaningful warning by default about this sort of downgrade, at least on main entry pages.

It might be nice if browsers treated "http:" as meaning "Get me a secure channel if you can, but I will also accept insecure on a site that has never given me a certificate before." In this case, sites that redirect older browsers from http to https would not be seen as lowering their security level on the next visit; this could be intuited from the typical behaviour of an http page that does nothing but immediately redirect to https. This matters because links on the web almost always point to http pages, almost never to https pages.

All certificates should contain an age: "how long has this identity been certified to this keyholder." A reduction in the age can be treated as a downgrade. For example, if I hold a certificate that says, "I have had this IP for 3 years" and suddenly you see a certificate that says, "I have had this IP for 1 hour" it can be a sign of potential trouble. If the 3-year-age certificate requested warnings on such downgrades, they could come.
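The age check itself is simple. A sketch, assuming each certificate carries a "certified since" date:

```python
from datetime import date


def age_downgraded(old_since, new_since):
    """True if the new certificate claims a shorter certified history.

    old_since / new_since: the 'certified since' date carried in each
    certificate. A later (younger) date in the new certificate means
    the claimed history has shrunk, a potential red flag.
    """
    return new_since > old_since
```

Whether a detected age reduction actually produces a warning would depend on the warning policy the old certificate requested.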

It is worth noting that many of the vulnerabilities that have been found in TLS, such as MD5 weakness at some CAs, null characters in certificates and poor checking of issued certificates by CAs would be detected by checks for unplanned changes in security.

Sites should control what generates warnings

Designers of the system have a hard time predicting what situations will be truly unusual and worthy of warning. Sites should be able, while in a "higher" security mode, to advise the browser on what to warn about and what not to warn about regarding security downgrades.

For example, a secure page may contain embedded items like graphics which are not secured, or secured at a lower level. It may contain links to unsecured pages. In some browsers this generates warnings which today are almost always turned off immediately by the user.

As described above, each server should maintain a page with security policy information, including what warnings are desired or not desired. Such statements are always advisory -- a browser could decide to modify or ignore the advice. Users can also set their own policy in browsers to supersede browser defaults and site advice. Browsers don't have to download the file unless they see something they might generate a warning about.

For example, the security policy might say: "We know the embedded GIFs on this page are unsecured and it was planned this way" and the browser could avoid the warning. On the other hand, if they do something insecure by accident, the browser might offer a warning. (Generally, this sort of browser behaviour should be deployed for a while before it is turned on so that web sites can all test themselves on the new browser. Otherwise a flurry of warnings will come if sites are not ready, making everybody ignore them.)

Ideal policy would be for a site to list all deliberate security downgrades they implement, and ask for warnings about the others. A perfectly behaved site would ask for all warnings.
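The browser's side of that policy could be as simple as this sketch, where the event names and the policy shape are assumptions:

```python
def should_warn(event, declared_downgrades):
    """Warn unless the site declared this downgrade as deliberate.

    event: a string naming the observed downgrade,
           e.g. "insecure-embedded-images"
    declared_downgrades: the set of downgrades listed as deliberate
           in the site's security policy file
    """
    return event not in declared_downgrades
```

A perfectly behaved site declares nothing, so every downgrade warns; a site with known, deliberate mixed content lists exactly those items and keeps warnings meaningful for everything else.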

Most important, however, would be signed and certified declarations about security changes, in particular changes of certificates or downgrades in security. When there is a planned change in security levels or other change to anything but a higher security level, a statement in the policy file, signed by the old key, would make it clear that you are still talking to the old site. In the event a key was compromised, a statement revoking it, signed by an appropriate CA would be necessary.

It's also possible to allow revocation using the compromised key, though that opens up a DOS attack. An attacker who can fool a higher level CA to issue a revocation can also attack you -- this is an expected weakness in CA based systems.

With such a system, sites that do nothing would generate few warnings, ideally only warnings when it is highly likely something odd is going on. Sites that decide to improve their security procedures, however, could store the new security policy, and any use of their site would be scrutinized more closely with more warnings issued about troubles. Users might not be able to turn off warnings that are deliberately asked for by the site, or at least would find it harder to do so, or find they can only turn it off for one domain but not implicitly for others, because that domain is setting off too many false alarms.

Note that certificate changes using the same private/public key do not constitute an unusual circumstance, and don't need a stronger certificate to declare them.

Consider: A site has been wiped and rebuilt. Old keys are lost. Browsers visiting the site will detect its new key and certificate do not match the old one from the last visit 2 years ago. They wish to warn the user, but they first fetch the warning policy. In the warning policy is a statement signed by a trusted CA with a high level of verification indicating the new key and certificate are trusted for the domain, IP or company, and no warning need be issued.

Chances are this "I changed my key" certificate is not a free one, and is only granted when the CA verifies the identity of the party asking for it.

All rules in the file must be signed with some trusted key, of course, since the security file will not be the real one if a domain or IP is compromised.

Changes of CA

One of the most troublesome situations concerns a change in the CA, accompanied by a change in key. Current browsers accept this gladly if the new CA is also trusted, but it has been the source of many vulnerabilities, in part because it means your whole system is only as secure as its weakest CA. For example, some CAs are actually run by foreign governments, and all CAs are subject to being forced by their government to issue false certificates. Real changes of CA happen, but are rare. Soghoian and Stamm recommend, for example, that a change in the country of the CA is more worthy of concern than a switch of CAs within a country.

A change of CA is needed when a CA has been compromised, or when a site no longer likes doing business with its CA. When a change of CA is done, a site's warning policy could indicate whether or not to warn about it -- such a rule of course signed with the old key as well as the new key and the new CA. An attacker must then compromise the old key itself (not just the old CA) to disable this warning.

A change to a CA in a different country might generate a warning no matter what, though that would make people quite wary of doing it even when it makes sense.

Note on use of remembered keys

Remembering keys, and not just certificates, presents a problem with sites that are really a network of different machines which are all certified but use their own unique keys. This can be addressed either by sharing keys among the machines, or by assuring that one master key is always seen at some point or another (through an embedded image for example) so that this master key is assured to be cached and usable for signing security policy changes.

API for easy non-automatic certificates

While all sites should have at least automatic certificates, the process of getting simple somewhat strong certificates can be made easier. In particular, domain registrars should offer certificates on the domains they sell -- ideally included for free with the price of the domain.

To assist with this, an API should be defined so that web servers and other tools can fetch, and even help generate the certificate for the domain. A web server might, when getting ready to serve a domain, do a DNS/whois query to find out who the domain registrar is, and then query that registrar to fetch the certificate, or even help create it.

Fetching a pre-created certificate can be done with ZUI. (Even putting that certificate into DNS records can be done with ZUI.)

Creating the certificate involves authenticating to the domain registrar using whatever system it uses to control access to the domain and its data. This is usually a logged-in web session with the registrar. During this session, the site's public key would have to be sent to the registrar for signing, which means there must be a way of easily getting that public key during a web session.

It's also possible to build a tool to run on the server where the userid and password for the domain at the registrar's site are entered into the tool, and it handles sending the public key to the registrar and getting a certificate for it.

A common attack against a site is to try to change its DNS registration, so such a system must be designed so that it is not easy for an attacker who later gets the password to both change the DNS parameters and get a new certificate to use with them -- in fact the domain certificate should be a defence against hijack of the DNS records. As such, the issuance of a new domain certificate for a new public key should not be automated. Only the first fetch and renewals of that certificate should be automated. Such certificates should also show the time of first certification, and a decrease in age may be cause for alert.

Because it should be easy to switch domain registrars, it will be necessary to deliver a signed authorization of the switch of certificates. This is something that the old registrar must do even though they are losing the business, just as they are required by ICANN to ease the transition in domain switches. ICANN should require a similar mandate here.

Security Implications

Man in the Middle on First Use

Some ZUI security means security that is less strong than it could be if perfect. Some opportunistic systems, like SSH, are subject to the famous "man in the middle" (MITM) attack the first time they are used with a site. If they are not attacked the very first time, and continuously from then on, they are secure.

Some debate how bad this is. In the past, there have been few documented cases of real MITMs, and it was generally believed you should worry about them only if you were being spied on by the NSA. Today, the tools are more readily available, and so fear should be higher. You might want to avoid doing your first interaction with a site on an open wireless network, but you can happily connect on an open wireless network if you got your introduction elsewhere.

It is important to note that the first-use MITM problem is immediately discovered if the MITM is no longer able to control your traffic. So if somebody does an MITM on you on your first visit to a web site in an internet cafe, the moment you leave the cafe, you will find out it happened. Likewise if they tap your home internet line, you will find out the first time you go to a cafe, at least with a laptop. Finding out may be cold solace, but it does limit the number of people to whom such attacks are useful. (For example, they are not practical for long term spying on a laptop user.)

The use of purely self-signed certificates (or general unsigned keys -- self signed certificates are a bulky kludge if the problem is just remembering keys) is at risk to MITM on first use. However, as a means to simply make a web session secure against ordinary listeners it is quite good, as long as people don't expect it to be more than that. The main fear people have is that users may be fooled into thinking they are safe, when in fact they might be compromised.

If, however, we take the approach that opportunistic encryption of this form has ZUI and is something the average user is not even aware of then there is no negative in doing it for that user, and in fact few negatives about making it the standard behaviour.

Automatic certificates

MITM becomes less of a worry if there is a good system of automatic certificates which can be generated without human intervention. What is key is understanding what they certify.

IP Address certificate

A site can ask a CA to certify that it controls an IP address at the present time. This is easy to do, as the CA can assure that the requested certificate goes to the specified IP address. To compromise this, the attacker must compromise the path between the host being certified and the CA. CAs are expected to maintain high levels of security to prevent and detect that if the compromise is near them. Compromises near the host being certified are easy (especially on the same ethernet) and should be expected.

An IP address certificate only certifies access at the time of creation. However, such certificates can be made quite short lived, as it is trivial to get a new one, even every hour -- even every minute -- though normally they would not be renewed that often.

An IP address certificate eliminates the need for self-signed, opportunistic encryption methods. Using one, you can be certain you are talking securely, though you know close to nothing about who you are talking to, just as with a self-signed certificate. An IP address certificate on the host you are visiting protects you from attacks near you more than attacks near that host. Of course, hosts that have secure LANs and secure paths to the CA can protect their IP address certificates better, but there is little point, as the certificate is not certifying their identity. Such a certificate cannot be MITMed by somebody near you, such as somebody on the wireless LAN with you.

Domain & Web Server certificate

A new web server can readily request a certificate proving, at least at that moment, control of a domain and the web server at it. The web server can make a request of the automatic CA, which will attempt to fetch a magic page from the web server, using secured DNS. The magic page will contain the public key to sign and the certificate can then be generated and made public.

This certifies the following: You are talking to the same web server the CA spoke to when it made the web fetch. Your path to that server is as secure as the CA's. This does not mean there was no security compromise at the site, just that if there was, it applies to both of you. It's even possible there was a security compromise at the CA, but that means you are also compromised in the same way, which is a much higher level of attack.

This level of security is actually the most common kind on the web, where a certifier sends a magic E-mail to a site, and the ability to read that E-mail is used as proof of association with the domain or email.
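A toy version of the magic-page issuance described above. The page path, the function names, and the stand-in signing step are all assumptions for illustration:

```python
def issue_domain_cert(domain, fetch_page, claimed_pubkey, sign):
    """Toy automatic CA: certify a domain by fetching its magic page.

    fetch_page(domain, path): performs the CA's own fetch of the page
                              (ideally over secured DNS); any callable
                              with that shape will do here.
    claimed_pubkey: the public key the server asked the CA to certify.
    sign: the CA's signing function (stands in for real crypto).
    """
    published = fetch_page(domain, "/.well-known/auto-cert-key")
    if published != claimed_pubkey:
        return None  # the server the CA reached did not publish this key
    return sign({"domain": domain, "pubkey": claimed_pubkey})
```

The whole exchange involves no human: the web server publishes its key at the magic page, asks the CA to certify it, and the CA's independent fetch is the proof of control.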

Long-duration automatic certificates

Once it is possible to receive an automatic certificate for an IP address or domain, it is possible to demonstrate this ability over a longer period of time. For example, if you get a certificate every week for a static IP you own, the CA can look at the history they have (or a chain of certificates you provide in the request) and then certify, "This has been true regularly since a given date."

This adds a fair bit of security, as an attacker must not only compromise the domain or IP to get a certificate, but must do it over a long period of time. This is both harder to do and more likely to be noticed.

As such it may make sense to give a low level of trust to a brand-new automatic certificate, but to trust it more as it accumulates a history.
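Computing the "certified since" date from a chain of weekly certificates might look like this sketch, where the 8-day gap tolerance is an assumed parameter:

```python
from datetime import date


def certified_since(cert_dates, max_gap_days=8):
    """Earliest date of the unbroken run of certificates ending at the
    newest one.

    cert_dates: issue dates of successive certificates, sorted oldest
    first. A gap larger than max_gap_days breaks the certified history.
    """
    since = cert_dates[-1]
    for prev in reversed(cert_dates[:-1]):
        if (since - prev).days <= max_gap_days:
            since = prev
        else:
            break  # the history is broken here; ignore anything older
    return since
```

A browser or CA could then weight its trust by how far back the unbroken run extends, giving a brand-new certificate less weight than one with years of history.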

Multiple automatic CAs

There can and should be several automatic CAs. Not only can sites choose among them, or use fallback units when one is offline, but it may also make sense to do all automatic certification through several automatic CAs, scattered around the world. Such an automatic certificate is highly unlikely to be compromised by the CAs being compromised. If it is compromised it is because the certified host is compromised or a part of its network quite close to it is compromised.

Easy not-entirely-automatic certificates

We could also consider certificates that are not fully automatic but more secure, and yet still cheap enough to be free. For example, if there is any other existing authentication system that we can trust, and we can piggyback on top of it, that may be enough. One obvious choice is DNS. Domain registrars have a way of securing that you are authorized to make changes to a domain. There is a move to add more security to DNS. If this is in place, with an API to detect it, then a certificate can be generated based on this which can't be compromised by attacking the LAN of the site.

There are also a number of out of band channels available for free or very little money, such as SMS, non-VOIP phones and the like. However, use of such channels means some UI, and if it is not done carefully the deployment will drop rapidly. Nothing will compare to the 100% deployment possible with fully automatic certificates, but this does not mean that making higher level certificates easier can't help. Of course, if they are not as secure as the top level certificates they can't be trusted like them.

What to do if you lose your key?

If a system with an automatic certificate loses its key, as will happen from time to time due to disk crashes, compromises and even system reinstalls, this will cause a downgrade which might trigger warnings.

This is OK because this is an exceptional situation which does not require ZUI. A site in this situation should then do extra work, and possibly spend money, to get a more verified certificate from a CA. This would then show a security upgrade which would not present a warning. After the upgrade, they could downgrade by including a statement signed by their new high-level key approving the downgrade. This way they need not keep paying fees every year to the CA.

Default operations should be ZUI and free. Special case operations, including rebuilds of machines from scratch, loss or compromise of keys, and changes of CA, should have a UI and probably will have a cost, if they are to be properly verified.

Summary of certificate styles

  • A self signed certificate says, "You are talking to the same party you talked to before while using this certificate."
  • An IP address certificate says, "You are talking to a system which other parties see as being at this IP address, perhaps for a long duration."
  • A domain certificate says, "You are talking to a server which is at the specified domain from the standpoint of you and various CAs, and possibly has done so for some time."
  • An identity certificate says, "You are talking to a server managed by a party who used the specified out of band techniques to verify their true identity with the certifier."

In all cases, the phrase should actually be taken to mean that you are talking to a holder of a private key that was used in the certification of the specified attribute, and you may be talking to another party if that key has been compromised.


Secure cookies

Cookies should understand security. A cookie created at a high security page should not be delivered to a lower security page, and certainly should not be settable by a low security page. A good policy around cookies should be implementable and enforced through the defaults, though it is OK if a secured page decides to transmit a cookie with explicit flags saying it can be read or altered by lower security pages.

Cookies are a big security hole right now, particularly because many sites use them as a way of detecting who a user is, and that they have logged in. While the password entry for login may have been done over TLS, the cookie is then set and used in non-secured pages, so anybody on a wireless network can pretend to be the user, very easily if it's a "loose" login cookie that persists even when a laptop user moves from network to network, as many are.
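A sketch of such a default policy, assuming pages and cookies each carry one of a few illustrative security levels:

```python
# Illustrative levels only; a real browser would derive these from the
# certificate states described earlier in this article.
SECURITY_LEVELS = {"none": 0, "auto-cert": 1, "full-cert": 2}


def may_send_cookie(cookie_level, page_level, explicit_downgrade_ok=False):
    """A cookie set at a given security level is withheld from pages at
    a lower level, unless the site explicitly flagged it as shareable."""
    if SECURITY_LEVELS[page_level] >= SECURITY_LEVELS[cookie_level]:
        return True
    return explicit_downgrade_ok
```

Under this default, a login cookie set over a certified connection would never ride along on an unsecured fetch, closing the sidejacking hole described above unless the site opts out deliberately.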


Proxies

The use of a secure web blocks the use of local web proxies found on LANs or ISP networks. Some ISPs and companies insist that all web traffic from a site go through a proxy. That proxy may exist for security reasons. It may exist to filter web traffic and block forbidden sites. It may exist to optimize traffic and cache frequently fetched web sites to speed up the net and make it more efficient. It may modify traffic, resizing and shrinking images for small devices. It may throttle traffic. It may spy on people. It may extract a userid and password for access to the network.

There are many uses of proxies, both good and bad. All are blocked by a secure web, whose primary goal, after all, is barring third parties from looking at or interfering with communications.

As a result, some proxy managers may decide they will simply block secure web traffic, and thus force people to use their proxy. While they can do this, if it becomes common the principles of a protected web are lost, and people get used to an unprotected one again.

Since we won't be able to stop people from wanting to proxy, instead we can lay down the following guidelines:

  1. Traffic from clients to the proxy should, of course, be encrypted.
  2. Traffic from the proxy to sites should also be encrypted. In the rare cases where there are two levels of proxy, each link should be encrypted.
  3. The proxy should not be hidden from the site. For example, a bank might well refuse to allow you to transfer large sums out of your bank account when going through a proxy. It might ask for independent confirmation in such circumstances (sending you an SMS for example.)
  4. The proxy should not be hidden from the user. A warning about the existence of a proxy should only be disabled for that particular proxy or class of proxies, not for all proxies that demand to listen in on your traffic. Security icons should be downgraded unless the user asks that they be restored because the user trusts the proxy greatly -- for example, it is the user's own proxy.
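Guideline 3 in particular can be acted on by the site itself. A minimal sketch, assuming the site looks for the standard Via header (or the common X-Forwarded-For header) to notice a proxy; the transfer limit and function name are illustrative assumptions, not any real bank's policy:

```python
def requires_extra_confirmation(headers: dict, amount: float,
                                limit: float = 1000.0) -> bool:
    """Decide whether a transfer needs out-of-band confirmation
    (e.g. an SMS) because the request arrived through a proxy.

    headers: the HTTP request headers as a dict.
    amount:  the requested transfer amount.
    limit:   illustrative threshold above which proxied requests
             trigger confirmation.
    """
    proxied = "Via" in headers or "X-Forwarded-For" in headers
    return proxied and amount > limit

# A large transfer routed through a visible proxy triggers confirmation.
print(requires_extra_confirmation({"Via": "1.1 corp-proxy"}, 5000.0))  # True
```

This only works, of course, if proxies follow the guideline and do not hide themselves; a hostile proxy can strip these headers, which is why the browser-side warning in guideline 4 matters too.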

Today, many proxies rely on the fact that 99% of the web is in the clear, and so they will allow TLS/https to go through unproxied because it is rare. There is an irony that if we move to have 99% of the web protected, proxies and firewalls will now feel they must stop it. This is probably still a win, but it's debatable.

Path of adoption

The move to a new security regimen is complex, and must not in itself create large numbers of warnings. The goal is a web where it is quite unusual for traffic to be sent unencrypted, so unusual that it might eventually be worthy of a warning. There are a number of intermediate steps.

Once the steps are mapped out, browsers should offer a configuration box that allows various settings for points along the plan. Thus you could say in your browser, "Act according to the 2012 plan" or "Act according to the final best practices plan." While most users would not use these modes, web site authors would be strongly advised to use and test their web sites in these modes, to see what warnings do and don't come up. These modes would, in an explicit testing mode, reveal warnings you might want to signal but which must be turned on by the site.

Initially, most of the steps which present no changes for users and sites can be implemented right away. These steps would include security procedures and warnings that are only active when turned on explicitly by the site being visited, or by explicit modes in the browsers.


Browsers should implement anti-phish techniques in their chrome. In particular there must be a secure way to enter passwords which appears in the chrome of the browser and can't be faked or intercepted by sandboxed applications or javascript.

Ideally it will still be possible to put password boxes in web forms as laid out by the web designer, but their proper use would also trigger unmistakable signals in the chrome.

If the browser keeps a vault of user passwords (or simply their hashes) it should detect any time a password from the vault is re-used on a different site, or the same site at a lower security level. Before transmitting this password to the new site, the browser should warn the user that this is happening, and tell them to be alert for phishing. However, this warning will also include the option to not be warned again about this password on this site, or on all sites. The latter would indicate this is the sort of password the user shares over many sites, typically sites without much security consequence that are still demanding a login and password.

A user may also, in the password manager, be able to designate some passwords as high security, so the browser can notice they are being typed anywhere other than the site they are intended for. The browser can even notice if the user starts typing such passwords to keystroke grabbers or non-password web form boxes. In effect the meaning is "Never let me send this string to any web site other than the one it belongs to."
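A minimal sketch of such a vault, assuming the browser keeps only hashes of passwords keyed to the site they were saved for (the class and method names are illustrative, not any real browser API; a real vault would also salt its hashes):

```python
import hashlib

class PasswordVault:
    """Browser-side vault storing password hashes, able to flag
    re-use of a known password on an unexpected site."""

    def __init__(self):
        self._entries = {}  # password hash -> site it belongs to

    @staticmethod
    def _digest(password: str) -> str:
        return hashlib.sha256(password.encode()).hexdigest()

    def remember(self, site: str, password: str) -> None:
        """Record that this password belongs to this site."""
        self._entries[self._digest(password)] = site

    def reuse_warning(self, site: str, password: str):
        """Return the original site if this password is being typed
        somewhere other than where it was first saved, else None."""
        origin = self._entries.get(self._digest(password))
        if origin is not None and origin != site:
            return origin
        return None

vault = PasswordVault()
vault.remember("https://bank.example", "hunter2")
print(vault.reuse_warning("https://phish.example", "hunter2"))  # the bank URL
print(vault.reuse_warning("https://bank.example", "hunter2"))   # None
```

The "high security" designation above would be a per-entry flag on such a vault, letting the browser check every outgoing form field, not just password boxes, against those hashes.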

Of course an even better anti-phish technology is a user-based signing key, though this has roaming issues. However, a move to digital signature based authentication would do much to prevent phishing. The use of "authenticated actions" as an alternative to login can also reduce this and enhance privacy by breaking the "every site must have a login" cycle.


Here are some suggested risky states that may be worth detecting. Whether they are conveyed to the user and how will be dependent on headers sent by the site and browser policy, as well as stage of adoption.

  1. A page which was once presented securely is now presented with less security. This is currently common due to downgrade from https to http. If the page declares it never appears at lower security, this should cause a warning.
  2. A page which was secure is presented through a different certificate, and the old key (remembered from the old certificate) is not used to confirm the use of the new certificate. Today change of certificate is rare, but there is no way to prepare for it, and it is not warned about.
  3. A page with a password box is presented in-the-clear, while on an untrusted network. Today this is quite common.
  4. If a password form is submitted to an in-the-clear URL, or a GET URL. Today this is quite frequent.
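Risky states 3 and 4 are mechanical enough to sketch directly. A minimal illustration, assuming the browser has the page URL, the form's action URL, and its method in hand (the function and warning strings are illustrative):

```python
from urllib.parse import urlparse

def risky_password_form(page_url: str, action_url: str, method: str) -> list:
    """Return a list of warnings for a password form, per risky
    states 3 and 4: shown in the clear, submitted in the clear,
    or submitted via GET (leaving the password in the URL)."""
    warnings = []
    if urlparse(page_url).scheme != "https":
        warnings.append("password box on an in-the-clear page")
    if urlparse(action_url).scheme != "https":
        warnings.append("form submits to an in-the-clear URL")
    if method.upper() == "GET":
        warnings.append("password sent in a GET URL")
    return warnings

print(risky_password_form("http://example.com/login",
                          "http://example.com/do-login", "GET"))
```

Whether each warning is shown, and how loudly, would be governed by the site's declared policy headers and the browser's stage-of-adoption setting, as described above.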