Feature requests for a vocabulary editor

I have been searching for quite a while now and apparently there is a missing piece of software waiting to be made. If you work with RDF data in any way, you have probably created a vocabulary using OWL and/or RDF Schema at some point. This works well for all the technologists out there, but in my world vocabularies should be created by domain experts rather than developers. And domain experts do not know OWL or RDF Schema.

Standards require reference implementations!

First, some people bash Microsoft for not implementing DIS 29500 (OOXML) in Office 2007. Then, someone discovers that OpenOffice 2.4 does not create proper ODF. (Update: the test procedure was wrong.) And then Microsoft announces that a coming Office service pack will add native ODF support to Microsoft Office ahead of OOXML support. And South Africa appeals the OOXML adoption. Will Microsoft Office 2007 become the first office suite to support ODF?

At the heart of the issue is the lack of reference implementations. ISO is way behind W3C in this area. Could someone please tell ISO that open source reference implementations are an absolute necessity when working with standards for information exchange?

From the W3C technical report development process section 7.4.4:

Preferably, the Working Group should be able to demonstrate two interoperable implementations of each feature.

It is simple, really. The benefit of a standard is created when it is used. Open source reference implementations shorten the time to market for everyone implementing the standard in their products, and they also disambiguate the interpretation of the specification.

Tim, please tell me you know someone at ISO that can fix the process.




Dear lazyweb, please pimp our balcony

Spring is in the air and it is time to start using the balcony. Currently it is in a state of decay, mainly used to store old furniture. I am out of ideas about what to make of it. Can you help? Measurements below.


Does your webserver give HEAD?

In the process of constructing a crawler that finds and checks PDF documents on a website, I discovered that a lot of sites don’t return useful information for HEAD requests. A HEAD request should return the same set of HTTP headers as a normal GET request, just without the actual payload.

The typical response seems to be status 500 (Internal Server Error) on a lot of IIS sites. So now is a good time to check your own sites and see what you get back from:

curl --head http://www.mysite.com
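If you want to automate the same check from a crawler, here is a minimal sketch in Python using only the standard library. It issues a HEAD and a GET for the same URL and compares status codes and a couple of headers; `check_head` is a hypothetical helper name, and the choice of which headers to compare is an assumption, not a rule from the HTTP spec.

```python
# Sketch: does this URL answer HEAD the same way it answers GET?
# Standard library only; no third-party dependencies.
import http.client
from urllib.parse import urlparse

def check_head(url):
    """Return (head_status, get_status, headers_match) for a URL."""
    parsed = urlparse(url)
    conn_cls = (http.client.HTTPSConnection if parsed.scheme == "https"
                else http.client.HTTPConnection)
    path = parsed.path or "/"

    # Issue the HEAD request.
    conn = conn_cls(parsed.netloc)
    conn.request("HEAD", path)
    head_resp = conn.getresponse()
    head_headers = dict(head_resp.getheaders())
    head_resp.read()  # drain; the body should be empty for HEAD
    conn.close()

    # Issue the GET request on a fresh connection.
    conn = conn_cls(parsed.netloc)
    conn.request("GET", path)
    get_resp = conn.getresponse()
    get_headers = dict(get_resp.getheaders())
    get_resp.read()
    conn.close()

    # A well-behaved server returns the same status and essentially the
    # same headers for HEAD as for GET. We only compare a couple of
    # stable headers here, since e.g. Date can differ between requests.
    interesting = ("Content-Type", "Content-Length")
    match = all(head_headers.get(h) == get_headers.get(h)
                for h in interesting)
    return head_resp.status, get_resp.status, match
```

A site that returns `(500, 200, False)` here is exactly the misbehaving IIS case described above: the page is fine over GET, but HEAD is broken.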