Archive for the ‘Internet’ Category
Because it will use open protocols, the goal is to let users carry their identities anywhere on the web. Updates made to those identities out on the web will make their way back to Genome, instead of users having to return to Genome to edit their profiles.
This means implementing a volatile Social Graph. Go, ruskies!
And this was very nice to hear as well:
Genome will provide an open instant messenger that’s integrated with your contacts.
(I’m sure that implies XMPP.)
Uh oh, AppEngine is the shit! I’m very excited! Might just be the News of the Year.. we’ll see about that. But, I always want more..
GAE is definitely packed with a lot of sweet features, like having Python as the first language and Django as the ‘featured’ web framework. Ruby might have been pretty much as good a language, but Python’s more comprehensive platform libraries are very handy to have; and I get the feeling that the Ruby community puts slightly more emphasis on creating “flashy web sites” than on creating more novel services.
Also, I was very happy to see relations and transactions in the DB interface. SimpleDB is just too simple.
But now, I also think that (to be frank) App Engine is already a little crippled: the only communication pattern available is request-response. Nothing like event-based mechanisms can be implemented using the current interfaces.
There are currently at least two factors that limit the design of more responsive apps:
- connection timeout limit is ‘a few seconds’ → no persistent connections (think Comet)
- no threads or background processing
The drivers for this are clear: the current API structure guarantees that there are practically no limits to the distribution and scalability of applications. (You couldn’t build unscalable apps even if you wanted to! Interesting, eh?) The programmer doesn’t need to care about load balancing, state handling, platform architecture, etc. He can just feel happy that his code is presumably running on BigTable etc. And the abstraction really does feel successful, at least for simple apps.
But now, if you remember that
- services are switching to two-way communications to suppress polling load and cut down lag,
- amount of time-sensitive information is increasing, and
- users’ patience is decreasing,
you don’t need to be Nostradamus to see the severity of the request-response cycle.
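To put the polling load in perspective, here’s a back-of-the-envelope sketch (all numbers are hypothetical) of what request-response-only clients cost a service:

```python
# Back-of-the-envelope polling load (hypothetical numbers).
# With request-response only, every client has to poll; with push,
# the service would send roughly one message per actual update.

def daily_poll_requests(users, interval_s):
    """Requests per day when `users` clients each poll every `interval_s` seconds."""
    return users * (86400 // interval_s)

# 10,000 users polling every 15 seconds to keep lag tolerable:
print(daily_poll_requests(10_000, 15))  # 57600000 requests/day, mostly empty responses
```

And the only knob you have is the interval: shorten it and the load explodes, lengthen it and the lag does.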
Basically, I’m disappointed not to see interfaces to Google’s messaging platform anywhere. But push assumes quite a different application architecture, so I guess there weren’t enough business reasons — and perhaps not enough concrete experience — to design/publish that kind of platform. Dion might know a little more about Google’s standing..
But, the thing shall be called PollEngine for now.
And yeah, I’ve been a little too occupied with work recently, severely lagging in following the industry. At least I got my ACM Queue order in place, which I think is one of the best tech magazines. Although I still need to get a couple more issues to be really sure about it..
Whoa, I couldn’t even have dreamt of seeing something like this yet:
For instance, if you land on a Business Week article about IBM, the site will then look at your LinkedIn profile (assuming you’ve given it permission to do so) and highlight the people you know at IBM.
That goes very much in the direction I super-briefly outlined before, even though these ‘features’ only scratch the surface of the whole (dynamic! real-time?) SNS system.
And LinkedIn was thought to be dead already… Will 2008 bring pervasive social networks? And how will it mash with micro-stuff? The micro-ness is absolutely one of my favourite predictions. Exciting times ahead, hopefully also business/technology-wise (as in speaking of our little startup..)
Update: Chris Messina had similar thoughts a month earlier:
In fact, it’s no longer even in your best interest to store data about people long term because, in fact, the data ages so rapidly that it’s next to useless to try to keep up with it. Instead, it’s about looking across the data that someone makes transactionally available to you (for a split second) and offering up the best service given what you’ve observed when similar fingerprint-profiles have come to your system in the past.
Now back to my contribution…
Now that everyone’s frenzying over the gruffalo, I guess it’s as good a time as any (at least) to expand my graph post [also] a little. The post isn’t anything notable but it gives some context, if interested.
For what instigated the current round of discussion, see TBL on the Giant Global Graph:
[The Internet] made it simpler because instead of having to navigate phone lines from one computer to the next, you could write programs as though the net were just one big cloud, where messages went in at your computer and came out at the destination one. The realization was, “It isn’t the cables, it is the computers which are interesting”.
The WWW increases the power we have as users again. The realization was “It isn’t the computers, but the documents which are interesting”.
Now, people are making another mental move. There is realization now, “It’s not the documents, it is the things they are about which are important”.
(See also a nice roundup on ZDNet, Who is afraid of the GGG?)
My views next.
To put it simply, my approach to building a next-gen SNS would be to extend the current feed-polling capabilities to the graph aspects as well (social networks etc.). For example, I currently have Facebook ‘importing’ my Jaiku feed, so why couldn’t FB just as well import changes to my Jaiku contact list and apply them to the FB domain?
The underpinning idea here is that future services should not try to be exhaustive by themselves, but should present the whole graph from their own point of view as well as possible — be that ‘entertainingly’ or whatever their beef is. More generally, services should fetch data from all over the web and decorate it in their own, valuable way. Services would be much more interesting if they combined all the data, instead of holding on so fiercely to the very tiny fragment (e.g., the FB account) of my overall graph.
This implies that services would need to adapt to whatever changes occur in the user’s social graph, and that happens to be exactly what I want at a very personal level — of course, this is also professionally highly intriguing. In practice, it’d even be somewhat trivial: just import the damn changes as you read the feeds. The difference to feed polling is that it’s not new content per se that is fetched, but changes to existing data. So, services need to delete stuff (relationships etc.) as well. For implementation details, see for example “Dynamic Graph Algorithms” by Eppstein et al., 1999, which appeared among the first Google hits (can’t remember for what; try it out). (‘Someone’ should probably adapt those to Rails so we could get the ball rolling…)
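As a minimal sketch of that “import the changes” idea, assuming a contact feed can be reduced to a set of user ids per fetch (a made-up simplification), the delta between two fetches is just set arithmetic:

```python
def graph_delta(old, new):
    """Diff two snapshots of a contact list (sets of user ids)."""
    return {
        "added": new - old,
        "removed": old - new,  # yes, services must delete edges too
    }

# Two successive (hypothetical) fetches of a contact feed:
previous = {"alice", "bob", "carol"}
current = {"alice", "carol", "dave"}
print(graph_delta(previous, current))
# {'added': {'dave'}, 'removed': {'bob'}}
```

The importing service then applies the “added” edges and drops the “removed” ones in its own domain; no long-term copy of the external graph needed beyond the last snapshot.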
Of course, the most prohibitive problem currently is the walled gardens. (I expected OpenSocial to do something about this, but Google failed me.) There are at least a couple of sites that publish more data, but it’s still very little. And note that the data here comprises the basic building blocks of the semantic web, yet the semantic web drafts completely ignore the fluctuating nature of data; which is why I think the semweb is born crippled.
Supergraph instead of a flat global graph
Then there was my notion of ‘supergraphs’ in the title; ‘volatile’ was just to emphasize the true dynamism — if you don’t store external data, you don’t need to synchronize it. (Keep it simple.) By supergraph I mean that different kinds of graphs should not be bluntly blended together; rather, the metadata should be used accordingly. Social networks provide a fine example here, too.
This ‘supergraph’ thing should be very intuitive also: if I ever wanted my LinkedIn contacts to be mapped to Facebook, it’d be extremely nice if the LI contacts were presented in a different style than my Jaiku contacts. And, perhaps, there could be some different tools available for each kind of network — please, no vampire stuff (like, wtf…) for the business contacts, aight?
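A tiny sketch of what I mean, with made-up names: if each edge carries its source network as metadata, the consuming service can style or filter per network instead of blending everything into one flat graph:

```python
# Supergraph sketch: edges annotated with their source network.
# All names and networks below are hypothetical.
edges = [
    ("me", "alice", {"network": "linkedin", "kind": "business"}),
    ("me", "bob", {"network": "jaiku", "kind": "friend"}),
    ("me", "carol", {"network": "jaiku", "kind": "friend"}),
]

def contacts_from(edges, network):
    """Contacts reachable via a given source network."""
    return [dst for _src, dst, meta in edges if meta["network"] == network]

print(contacts_from(edges, "linkedin"))  # ['alice'] -> render in 'business' style
```

The point is only that the edge metadata survives the import, so a service can offer different presentation (and tools) per sub-graph.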
And, just to note, not all contact networks should be mapped to every service (of course), but that’s a whole other graph story and I’ll leave it for later.
So, this ‘volatile supergraph’ thingy should be rather easy to implement (no hard non-trivialities) if you were a systems designer (and not part of the 80%), and I’d be very excited to have it working. I bet a few googol other zimboes would be as well.
Expect next: dynamic, rich graphs. (Or probably not. But still,) Thanks for listening!
Dynamist artists used the concept as part of a way of representing the complexity of processes, rather than be limited by the discrete and static moments within change, which also illustrated the limits of human perception.
— found in Wikipedia
Peter notes in his presentation Jabber, the Real-Time Internet, and You [via] that the XO-1 (OLPC) uses XMPP for link-local messaging. I could of course state something like way cool!, but that would only imply that it was something unexpected, which it definitely isn’t. Reasonable options are getting slim, fortunately! :)
Eventually. Contacts welcome.
The IM interface is the killerest, thanks guys. It feels like the way Jaiku should be used; 140-char input really doesn’t need a web page.
Congratulations, guys! That’s awesome!
Duh! The morons of stalkbook printed out my Jabber address in plain text, which naturally is the same as my personal email address. (And my phone number, if I just managed to collect myself and configure Asterisk; PBXes didn’t let you use your own domain last time I checked.)
(Fixed it already.)
ps. spammers would be unbelievably stupid not to use some very simple OCR to read addresses out of those pics…