lördag 20 oktober 2012

Retina support in CSS4

Retina displays are appearing in more and more devices, and web developers really need a flexible way to support both retina and non-retina devices efficiently.

Luckily, proposed additions to CSS4 offer a solution.

Before you question why you would consider CSS4 when working with current browsers, note that support for new features sometimes appears quickly when it is really needed. This is such a case...

As blogged by Jason Grigsby here, Safari 6 and Chrome 21 (the most widely used Chrome version since late August 2012) support specifying a set of images when defining background images in CSS4.

#test {
  /* Fallback for browsers without image-set support */
  background-image: url(assets/no-image-set.png);
  /* Browsers that understand image-set pick 1x or 2x themselves */
  background-image: -webkit-image-set(url(assets/test.png) 1x,
                                      url(assets/test-hires.png) 2x);
  width: 200px;
  height: 75px;
}

Edited example from Jason Grigsby's blog.

Various solutions based on JavaScript, or on dynamically generating device-specific HTML, are around. But they all share the same problem: you are solving a presentational problem in code that lacks information about basics such as user preference and available bandwidth. With this solution you move the choice of which image to load to the browser, which has a much better chance of making an informed decision.

Browser compatibility is not great yet, but currently most retina devices are built by Apple. A high portion of those users are likely to use Safari 6 or Chrome, which solves the problem as long as you remember to set the standard background-image for backwards compatibility with everybody else.

söndag 17 juni 2012

Browser preloading

A classic optimization on a web site is to configure the cache headers of a page so the browser can display it instantly if it has been loaded recently. This works very well when the user hits the back button to go back to the previous page.

What if we could do the same for the next page that the user will request? This is possible if we have two components:
  1. We need to guess which page is going to be requested.
  2. We need to tell the browser to preload it.
Number one can be addressed by gathering statistics of which pages are browsed on your site.
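That statistics gathering could be sketched like this (the class and method names are made up, not from any real plugin): keep a tally of observed page transitions and guess the most frequently seen next page.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: record which page follows which, then predict
// the most likely next page from the observed counts.
public class NextPagePredictor {
    // from-page -> (to-page -> number of times that transition was seen)
    private final Map<String, Map<String, Integer>> transitions =
            new HashMap<String, Map<String, Integer>>();

    public void recordTransition(String from, String to) {
        Map<String, Integer> counts = transitions.get(from);
        if (counts == null) {
            counts = new HashMap<String, Integer>();
            transitions.put(from, counts);
        }
        Integer old = counts.get(to);
        counts.put(to, old == null ? 1 : old + 1);
    }

    // Returns the most frequently observed next page, or null if unknown.
    public String predictNext(String from) {
        Map<String, Integer> counts = transitions.get(from);
        if (counts == null) return null;
        String best = null;
        int bestCount = 0;
        for (Map.Entry<String, Integer> e : counts.entrySet()) {
            if (e.getValue() > bestCount) {
                best = e.getKey();
                bestCount = e.getValue();
            }
        }
        return best;
    }
}
```

The prediction result would then be emitted into the page as the link tag shown below.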

Number two is solved by adding a specific link tag that is so far supported by Firefox and Chrome, although implemented in slightly different ways.

The HTML link:
<link rel="prefetch prerender" href="url-to-preload">

prefetch is used by Firefox. My testing indicates that the response to Firefox needs to have correct cache headers, otherwise the page will be requested again when the user navigates to it. You need to look at the web server logs to see the request; Firebug seems to disable the prefetching.

prerender is used by Chrome. My testing indicates that regardless of cache headers, the next page load is instant if the user requests this page. The prerendering shows up as a cancelled GET request (screenshot below).

I'm working on a WordPress plugin that will gather usage statistics and generate preloading instructions for the browser.

torsdag 8 mars 2012

One sprite to rule them all?

It is widely known that sprites are a nice way to combine several images into one to make the web browser load your web page quicker. But how far can it be taken without negative side effects?



In the picture above there is one big image (300+ KB) that contains all the graphics for an entire site theme. As you can see, the browser correctly starts loading this image early, but it also keeps starting loads of additional images. In the end the big sprite is the last to finish; the visual impact is a broken-looking site that loads several images and only at the very end adds all the visually important bits and pieces of the theme.

Clearly a case when the concept was taken too far.

For site themes that have a lot of shadows and large theme graphics, it is wise to split the load into multiple sprites. To limit the visual impact, consider moving all the small, colourful minor graphics into one small sprite that loads quickly, because waiting for these items is much more annoying than waiting for a background image.

Remember to set far-future cache headers on the sprites and your site will be lightning fast once the user has fetched them.
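As a sketch (a hypothetical helper, not tied to any particular server; how you attach the headers depends on your web server or framework), a far-future expiry is just two header values:

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Locale;
import java.util.TimeZone;

// Hypothetical helper that builds far-future cache header values for
// sprite responses. One year ahead is a common choice.
public class CacheHeaders {
    static final long ONE_YEAR_MS = 365L * 24 * 60 * 60 * 1000;

    // Tells the browser (and shared caches) to keep the file for a year.
    public static String cacheControl() {
        return "Cache-Control: public, max-age=" + (ONE_YEAR_MS / 1000);
    }

    // HTTP dates must be GMT and use this exact format.
    public static String expires(long nowMs) {
        SimpleDateFormat fmt =
                new SimpleDateFormat("EEE, dd MMM yyyy HH:mm:ss 'GMT'", Locale.US);
        fmt.setTimeZone(TimeZone.getTimeZone("GMT"));
        return "Expires: " + fmt.format(new Date(nowMs + ONE_YEAR_MS));
    }
}
```

If you ever need to change a sprite, remember to change its URL as well, since clients will not ask for the old one again for a year.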

söndag 15 maj 2011

Rough counters on Google App Engine

On Google App Engine the maximum update frequency for a single database entity is limited. If you need to count something at high frequency, such as page views on a web site, you need to do some magic to manage it.

The solution found in Google's documentation is to use sharded counters. The principle is that you have several database entities; to count, you pick one at random and update it. The number of possible updates scales with the number of entities you use. However, to read the value of the counter you need to fetch all the entities and sum them. The code to do this is also a bit complex, and you pay for that complexity in CPU resources.
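To illustrate just the principle (an in-memory sketch only; the real GAE version stores each shard as a datastore entity and updates it inside a transaction):

```java
import java.util.Random;

// In-memory illustration of the sharded-counter principle. Writes
// scale with the number of shards; reads must sum every shard.
public class ShardedCounter {
    private final long[] shards;
    private final Random random = new Random();

    public ShardedCounter(int numShards) {
        shards = new long[numShards];
    }

    public void increment() {
        // Pick one shard at random so concurrent writers rarely
        // collide on the same entity.
        shards[random.nextInt(shards.length)] += 1;
    }

    public long getCount() {
        // Reading requires fetching and summing all shards.
        long sum = 0;
        for (long shard : shards) sum += shard;
        return sum;
    }
}
```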

If you need a counter that allows a high update frequency, but you don't want the fuss of sharded counters and it doesn't matter if you occasionally miss a count or two, there is a better solution. For some of my applications I instead use rough counters. The basic principle is that you let your instance cache the counts locally and only occasionally write the counter to the database. You waste far fewer resources, and you can read the value of the counter directly.

Note: if your instance shuts down you lose the current local count. The error will likely be less than a tenth of a percent, but it isn't exact.

The code:

import javax.jdo.JDOObjectNotFoundException;
import javax.jdo.PersistenceManager;
import javax.jdo.annotations.NotPersistent;
import javax.jdo.annotations.PersistenceCapable;
import javax.jdo.annotations.Persistent;
import javax.jdo.annotations.PrimaryKey;

@PersistenceCapable(detachable="true")
public class RoughCounter {
    // The counter name is the key. It is assigned by the application,
    // so no generated value strategy is used.
    @PrimaryKey
    @Persistent
    private String mName;

    @Persistent
    private long mRoughCount;

    // Local bookkeeping only, never stored.
    @NotPersistent
    private long mLastUpdate;

    public RoughCounter(String name) {
        mName = name;
        mRoughCount = 0;
        mLastUpdate = 0;
    }

    public void increase() {
        mRoughCount += 1;
        // Only flush to the datastore about once every ten seconds.
        if (mLastUpdate > System.currentTimeMillis() - 10 * 1000) return;
        mLastUpdate = System.currentTimeMillis();

        // Move the locally accumulated count into the persistent entity.
        long count = mRoughCount;
        mRoughCount -= count;

        PersistenceManager pm = PMF.get().getPersistenceManager();
        try {
            RoughCounter rc = pm.getObjectById(RoughCounter.class, mName);
            rc.mRoughCount += count;
            pm.makePersistent(rc);
        } catch (JDOObjectNotFoundException ex) {
            // First flush for this name: create the entity.
            RoughCounter c = new RoughCounter(mName);
            c.mRoughCount += count;
            pm.makePersistent(c);
        } finally {
            pm.close();
        }
    }
}


This counter writes its result to the database about once every ten seconds. The counting itself can occur at ANY speed; it does not depend on database performance, only on execution speed, which is extremely fast compared to database updates.

tisdag 19 april 2011

Key value store on Google App Engine

Just created a quick hack to use Google App Engine (GAE) as a simple key-value store with a nice REST API. This allows it to be used as a data store for web applications, and also as a flexible way to serve web sites. It can even be used as an application server, with thinking similar to CouchApp.
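The core idea, sketched in plain Java (hypothetical names; the GAE version backs this with datastore entities and exposes it through a servlet):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// In-memory sketch of the key-value core. A REST front end would map
// GET /store/{key} to get, PUT to put, and DELETE to delete.
public class KeyValueStore {
    private final Map<String, String> entries =
            new ConcurrentHashMap<String, String>();

    public void put(String key, String value) {
        entries.put(key, value);
    }

    // Returns null if the key is absent (a 404 in the REST layer).
    public String get(String key) {
        return entries.get(key);
    }

    // Returns true if something was actually removed.
    public boolean delete(String key) {
        return entries.remove(key) != null;
    }
}
```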

Have a look at:

torsdag 20 januari 2011

Sums in Microsoft Excel

I often use MS Excel (2007) to evaluate data... simple and convenient for many cases...

Unless you try to sum several integer values and they exceed 1'000'000'000'000'000. Beyond that, Excel starts losing precision. I'd expect 64-bit integer math to work correctly, but it doesn't!
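The reason is that Excel stores every number as a 64-bit IEEE 754 double, which only holds about 15 significant digits exactly. The same limitation can be demonstrated in Java: above 2^53 not every integer survives a round trip through double, while 64-bit long arithmetic stays exact.

```java
// Shows where 64-bit floating point stops representing integers exactly.
public class PrecisionDemo {
    // True if the value survives a round trip through double unchanged.
    static boolean roundTripsThroughDouble(long value) {
        return (long) (double) value == value;
    }

    public static void main(String[] args) {
        System.out.println(roundTripsThroughDouble(1L << 53));        // true
        System.out.println(roundTripsThroughDouble((1L << 53) + 1));  // false
    }
}
```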

tisdag 23 november 2010

DropBox distributed computing

Nothing beats a free lunch. One tool that is becoming more and more important to me is DropBox. It is a great tool for storing and sharing your files across many desktops and project members.

Anything else... yes, why not use it as a distributed computer? After all, we all know that if you can leave GUI stuff and infrastructure out of it, many things are really simple.

What I did:

Create one folder, DistributedComputer. Share it with all the people trusted to run a computing node for you.

In it, place everything needed to pick a job, start an executable that does the processing, and then write the results to a directory.

I used one large text file to contain all the parts of the problem, and two directories for tracking: Started and Done. A simple program picks a random problem (it is important that it is random) and starts another program that runs the calculation; a file is created in the Started folder with the problem id as its name. When the calculation is finished, a file is created in the Done folder with the problem id as its name and the output as its content.

Super simple and it works!

Some notes:
Picking a random problem is important. If a node loses its network connection it can still keep working on randomly selected problems, with little risk of duplicating work. If the problems were picked in sequence, all nodes without a network connection would duplicate each other's work.

The program that picks the problem and starts the solver should of course verify that the problem isn't already started (by checking the Started directory) or done. If no unstarted problems are left, pick a problem that at least isn't done already.
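The picking logic described above could be sketched like this (a hypothetical file layout, assuming problem ids are known up front and markers are empty files named after the id):

```java
import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Sketch of the job-picking step. A problem is claimed by creating a
// file named after its id in Started/, and finished when a file with
// the same name exists in Done/.
public class JobPicker {
    // Prefer problems that are neither started nor done; fall back to
    // started-but-not-done ones (the original node may have dropped off).
    public static String pickProblem(List<String> problems,
                                     File startedDir, File doneDir) {
        List<String> fresh = new ArrayList<String>();
        List<String> notDone = new ArrayList<String>();
        for (String id : problems) {
            if (new File(doneDir, id).exists()) continue;
            notDone.add(id);
            if (!new File(startedDir, id).exists()) fresh.add(id);
        }
        List<String> candidates = fresh.isEmpty() ? notDone : fresh;
        if (candidates.isEmpty()) return null;  // everything is done
        // Random choice keeps offline nodes from duplicating the same work.
        Collections.shuffle(candidates);
        return candidates.get(0);
    }

    // Mark the problem as claimed; DropBox syncs the marker to all nodes.
    public static void claim(String id, File startedDir) throws IOException {
        new File(startedDir, id).createNewFile();
    }
}
```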

The real beauty of it:
  • Coding some simple command-line tools that work with files is really simple. It is also much less work to make them platform independent if needed.
  • DropBox handles tracking for you. You can see which computer created a file, and when a problem was started and finished.
  • You can easily continue with refinements and optimization of your program to solve the problem, each time a new problem is picked the executable is reloaded, all you need to do is put a new executable in the shared folder.
  • Zero code for communication and synchronization, only two directories to check for status.
  • All members can easily follow the project and contribute improvements.

Issues:
  • Size limits: 2 GB isn't huge, but it should cover a decent set of problems. If the results are big, put two files in the Done folder: one to mark the problem done and one with the actual results. You can then gather the results, process them as they appear, and delete them.
  • Since the files are shared, there is potential for users to grab the complete results.
  • There is also a risk that users wreck your files and data; not very nice in an internet project with thousands of users, but quite OK if you are doing this with some friends.