Utterances of a Zimboe

Programming the Internet.

Archive for September 2007

Stop making copies of the social network

with one comment

Update 10/01: I was a bit late: Stop building social networks. :) (It was on my reading list, but it took a while to acquire a time slice from my unfair scheduler.) I’d like to emphasize, though, that it’s great to have all kinds of social services; just try to hold back from duplicating the data.

I stumbled upon this discussion and couldn’t resist… My reply, I *wish* that existing social sites would import from CSV (in the thread Re: Six Apart is Opening the Social Graph), is on Social Network Portability.

Related to the long-awaited opening of the graph. (*yawn*)


Written by Janne Savukoski

September 29, 2007 at 1:03 am

Posted in Internet

OAuth a closed system?

with 4 comments

Hm. This was a major surprise: [OAuth spec]

OAuth includes a Consumer Key and matching Consumer Secret that together authenticate the Consumer (as opposed to the User) to the Service Provider.

I.e., the service provider needs to know the consumer service in advance. Although the spec says

Consumer Secrets MAY be an empty string when no Consumer verification is needed.

the approach is clear.
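To see why this stings, here’s roughly what the spec’s signing scheme implies on the wire. A rough, non-compliant sketch (the URL, key, and secret are made up, and a real implementation has more percent-encoding subtleties):

    import base64, hashlib, hmac, time, uuid
    from urllib.parse import quote, urlencode

    def oauth_params(method, url, params, consumer_key, consumer_secret,
                     token_secret=""):
        """Sign a request roughly per OAuth 1.0 (HMAC-SHA1). Note that
        oauth_consumer_key is mandatory: no key issued in advance by the
        service provider, no valid request."""
        oauth = {
            "oauth_consumer_key": consumer_key,
            "oauth_nonce": uuid.uuid4().hex,
            "oauth_signature_method": "HMAC-SHA1",
            "oauth_timestamp": str(int(time.time())),
            "oauth_version": "1.0",
        }
        pairs = sorted({**params, **oauth}.items())
        base = "&".join((method.upper(),
                         quote(url, safe=""),
                         quote(urlencode(pairs, quote_via=quote), safe="")))
        key = quote(consumer_secret, safe="") + "&" + quote(token_secret, safe="")
        digest = hmac.new(key.encode(), base.encode(), hashlib.sha1).digest()
        oauth["oauth_signature"] = base64.b64encode(digest).decode()
        return oauth

    print(oauth_params("GET", "http://sp.example.com/photos",
                       {"file": "vacation.jpg"}, "my-registered-key", "s3cret"))

The signing itself would work fine with an empty secret; it’s the registration step behind oauth_consumer_key that creates the coupling.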

So, what’s my issue? Simply that the Consumer Key kills spontaneous federation. For example, OAuth could get very handy with the opening of the social networks, but not all social networks can (and much less should!) be registered with each other; with these 93 services, pairwise registration in both directions would mean 93 × 92 = 8,556 registrations. Furthermore, these registrations could slow the growth of new services, as users would have to wait for the services to register with each other. And, while I wouldn’t like to sound too cynical, this allows services to completely deny federation with other, chosen services. (Read: ‘big services’ may easily cripple ‘small services’ by shutting them out.)
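The arithmetic, for the record (a throwaway sketch; the count is directed because each service must register as a consumer with each provider):

    def registrations(n: int) -> int:
        """Every service registers as a consumer with every other provider."""
        return n * (n - 1)

    print(registrations(93))  # 8556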

Please correct me, but it seems the OAuth board took a step backward from OpenID. I’m a huge fan of OpenID, and I’d like to see OAuth take the bright path as well.

Even Google’s own protocol has a friendlier approach in this regard (see also x-google-token):

Decide whether or not to register your web application. Registered web applications have the advantage of being trusted by Google; the standard caveat displayed to users on the Google login page is omitted.

Vs. “you MAY not need to register your application.” (Emphasis not mine, though the quote is somewhat adapted.) But at least Google doesn’t punch you in the nose as a welcome (and that’s really something.. :)

This was also interesting (from the OAuth spec; bolding mine):

Service Providers SHOULD NOT rely on the Consumer Secret as a method to verify the Consumer identity, unless the Consumer Secret is known to be inaccessible to anyone other than the Consumer and the Service Provider.

Well, is it a secret or not? So you still need IP-based authorization on top? This also undermines the “allows the Service Provider to vary access levels” part.

I hope that I’m wrong and someone corrects me. I really would like to have an open federation mechanism.


Written by Janne Savukoski

September 26, 2007 at 11:27 pm

Posted in Internet

Brief note about ubiquitous communications

leave a comment »

I was just reminded of an ‘old’ thought, which I hadn’t written down anywhere. Nothing too relevant, though…

The reminder was in 10 More Future Web Trends, the prediction about “Integration into everyday devices”. I had played with the thought of applying XMPP to that domain: all those embedded terminals would contain simple XMPP clients, and they would publish themselves as my latest communications resource as soon as I entered their presence. Thus, all my communications would follow me effortlessly. The terminals could—for example—use known Bluetooth device addresses (i.e., phones) to authenticate people; I guess the signal would be short-range enough for routing communications somewhat correctly. Though, if I had the phone with me all the time, what would be the point… But it’d be oh so neat. :)
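A minimal sketch of the idea, using the slixmpp Python library purely as illustration; the JID, the priority values, and the nearby-phone check are all made up:

    from slixmpp import ClientXMPP

    class TerminalPresence(ClientXMPP):
        """An embedded terminal logging in as one more resource of my JID.
        Once it sees a known phone nearby, it raises its presence priority
        so the server routes my conversations here."""

        def __init__(self, jid, password):
            super().__init__(jid, password)
            self.add_event_handler("session_start", self.session_start)

        async def session_start(self, event):
            self.send_presence(ppriority=0)  # idle: don't attract traffic
            await self.get_roster()

        def owner_nearby(self) -> bool:
            # Hypothetical check: scan for a known Bluetooth address.
            return False

        def claim_communications(self):
            # The highest-priority resource wins: messages addressed to my
            # bare JID now get routed to this terminal.
            self.send_presence(ppriority=50, pstatus="owner in range")

    xmpp = TerminalPresence("janne@example.org/kitchen-terminal", "secret")
    xmpp.connect()
    xmpp.process(forever=True)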

Also, link-local XMPP would offer much more lightweight M2M communications than UPnP.
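The advertisement side of that (XEP-0174, serverless messaging) is just a DNS-SD record over mDNS. A sketch with the python-zeroconf package, address and names being placeholders:

    import socket
    from zeroconf import ServiceInfo, Zeroconf

    # XEP-0174 presence is advertised as a _presence._tcp service via mDNS.
    info = ServiceInfo(
        "_presence._tcp.local.",
        "janne@kitchen-terminal._presence._tcp.local.",
        addresses=[socket.inet_aton("192.168.1.42")],
        port=5298,  # the conventional link-local XMPP port
        properties={"txtvers": "1", "nick": "Janne", "status": "avail"},
    )
    zc = Zeroconf()
    zc.register_service(info)  # peers on the link can now discover us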

Ah, and of course the Nabaztag should natively support XMPP. That would be so trivial.

Tags: XMPP, Internet, Future

Written by Janne Savukoski

September 26, 2007 at 9:06 pm

Posted in Future, Internet

Schema is the protocol

leave a comment »

Just a little note regarding my last post about encodings: an encoding is just an encoding, and only the XML schema defines a ‘concrete’ protocol. (Yeah, yeah, basic stuff, but I’ve seen it get mixed up.)

More precisely, it’s schema plus encoding that defines a concrete protocol, but if we assume the verbose encoding (‘normal’ XML, for debugging), the schema alone suffices to define the protocol. And, naturally, each version of an XML schema defines a different protocol. Due to the flexible processing of XML, apps are quite often able to handle several protocols. (Which, admittedly, have to be quite similar.)

And as encodings are completely isolated from the schema, they are transparent to the application.
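A toy illustration of the point (the decode_exi function is hypothetical; any codec producing the same infoset would do): the application below is written against the schema and never sees which encoding produced its input.

    import xml.etree.ElementTree as ET

    VERBOSE = b"<msg v='1'><body>hi</body></msg>"

    def decode_verbose(data: bytes) -> ET.Element:
        return ET.fromstring(data)

    def decode_exi(data: bytes) -> ET.Element:
        # Hypothetical EXI codec; it would yield the exact same infoset.
        raise NotImplementedError

    def handle(msg: ET.Element) -> None:
        # The application logic depends on the schema (msg/body),
        # not on the bytes that travelled on the wire.
        assert msg.tag == "msg"
        print(msg.findtext("body"))

    handle(decode_verbose(VERBOSE))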

Btw., this is in no way an academic discussion; it’s pure practice. I’m focused on implementation at the moment, with all major research on hold.

Written by Janne Savukoski

September 25, 2007 at 11:31 am

Posted in Technology

Information encoding, revisited

with one comment

Update 9/25: A little follow-up.

(A concrete protocol is defined by schema and encoding. I.e., each version of an XML schema is a new protocol. Understanding this is a prerequisite for reading any further.)

Stefan Tilkov notes some basic differences between AMQP and XMPP, and this one especially caught my eye:

One of my motivations, though, was that XMPP is based on XML, while AMQP (AFAIK) is binary. This suggests to me that AMQP will probably outperform XMPP for any given scenario — at the cost of interoperability (e.g. with regard to i18n).

Which of course is correct. XML is a bad choice for a transport protocol for a number of reasons: it is complex to parse and too ambiguous. These issues are not that critical when dealing with documents, but in live communications, all complications tend to escalate sharply.

And that’s exactly why I’d really like to see some experimentation with binary encodings—at least for protocols, if not for other uses of XML. It’s only just over a year since I last argued this point. Unfortunately, other stuff has taken precedence over my tinkering with encodings.

Quoting myself: ;)

And, as the binary format should be the most efficient way to represent that data, it really is the leanest ad-hoc binary format.

I.e., when done right (and why wouldn’t it be), EXI or the like should be very competitive with any proprietary protocol—XML itself shouldn’t place many restrictions on the efficiency of information encoding. Rather the opposite: when both sides know the respective ‘schema’ (a protocol spec, like AMQP’s), the amount of space spent describing structure in the actual transferred content should be minimal. So there shouldn’t be any reason at all to handcraft more transfer formats; the room for further optimization should be quite limited.

I must emphasize the following point, as it’s constantly ‘misperceived’: binary encoding is not about compression but about making parsing simpler and more robust. (Well, that’s the most important feature for me, at least.) Naturally, a binary encoding saves some space, as you don’t need to spell out the tags all the time, but it certainly doesn’t compress the actual content. Of course, compression is a good selling point (to the masses), but no sentient being should constrain itself by focusing only on that petty (uninteresting, trivial) property.
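To make that concrete, here’s a toy schema-informed encoding (the wire layout is invented for illustration; real EXI is more involved):

    import struct

    # The same infoset, verbosely: <msg><seq>42</seq><body>hello</body></msg>

    def encode(seq: int, body: str) -> bytes:
        """Both sides know the schema, so no tag names go on the wire:
        a fixed header plus length-prefixed content."""
        data = body.encode("utf-8")
        return struct.pack("!IH", seq, len(data)) + data

    def decode(buf: bytes) -> tuple[int, str]:
        """Parsing is a fixed-format read: no tokenizer, no ambiguity."""
        seq, length = struct.unpack_from("!IH", buf)
        return seq, buf[6:6 + length].decode("utf-8")

    wire = encode(42, "hello")
    print(len(wire), decode(wire))  # 11 bytes vs. 42 for the verbose XML

Note that the body bytes (“hello”) are identical in both forms; only the structural overhead shrinks, which is exactly the non-compression point above.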

Elaborating just a little bit more: the point of a binary encoding is exactly the fact that it is just an encoding, an aspect completely independent of the data model. So XML keeps its extensibility whether it’s encoded with an excess number of tags or with simple, single octets (or so; you know what I mean). I don’t know how extensible AMQP is, but I’m pretty sure it isn’t extensible in an ‘industry-standard way’.

Summary: a rant this time. Take care. :)


Written by Janne Savukoski

September 18, 2007 at 11:07 pm

Posted in Internet

Speculating the Google Phone

with one comment

Update 09/10: For some more ideas, see here: The Gphone is coming; how Google could rewrite the rules.

In the latest episode of the GigaOM Show, Ryan Block of Engadget presented some interesting speculation about the rumored Google Phone. Ryan suggested that the Gphone would be a mobile terminal for Google’s data in the cloud—a rather obvious proposition, for sure. Though I’d rather consider it a client to services in the cloud, not to mere data.

But hey, just watch the show.

[Google is] building a mobile operating system, and they want to make it flexible, extensible, expandable, and open.

(Based on a software platform by Android, which was bought by Google.)

The Gphone’s placement was suggested to be against the more generic platforms like Nokia’s S60 and Windows Mobile, whereas the iPhone has its own, more specific segment.

I’m expecting the Gphone to be an extremely interesting platform—if it ever materializes—but what I would seriously like to see (finally!) is a PC-like hardware standard for mobile devices. So that, as with PCs, individual hardware components would be upgradeable (memory, processor), and you could perhaps even add a component or two, like an RFID reader, a crypto/security chip, or even a 3D graphics accelerator for fancier mobile game graphics; think of a mobile World of Warcraft client.

Btw., the GigaOM Show is one of my favorites; I recommend it. Om is the man. :)


Written by Janne Savukoski

September 9, 2007 at 7:40 pm

Posted in Technology