The Discovery of France tells the story of the unification of France into one cultural identity, stitched together out of the zillion little villages, languages, and cheeses that surround the capital, Paris. It's a cultural history of France in the 18th and 19th centuries, exploring life in the country before it was truly unified by roads and telegraphs.
The author does a great job conveying just how large a country France is and how far-flung its places were before travel and communication were easy. The chapter title "O Òc Sí Bai Ya Win Oui Oyi Awè Jo Ja Oua" gives an idea of the author's approach, in this case cataloging the many words for "yes" across France. The book is full of amusing anecdotes from travellers, explanations of religions, the role of the Enlightenment and the Revolution in unifying the country, and overall a deep love for the French countryside.

I particularly liked reading this book as an American. Our vault into modernity came at the same time as the founding of the country itself; we never had an isolated agrarian past. European history is quite different.

I've been reading a lot of books lately. I hope to blog about them more, reviving a 13-year-old tradition.
For my recent geolocation demo I wanted a web page hosted on somebits.com to be all Web 2.0 and dynamically load data from geonames.org. I've never really done AJAX before, so I was surprised to find I couldn't make this remote call because of security restrictions. And I was even more horrified at the workaround. I couldn't find any place where all this is succinctly explained, so here are some notes.
The same-origin policy is a rule web browsers enforce for security. Basically, Javascript code running on a webpage at somebits.com cannot access any resources from any other domain. No access to the DOM of other sites, no cookies, no frames. And, important for AJAX, no XMLHttpRequest to a remote domain. So much for mashups where you load data from multiple sites!

There's a variety of workarounds. The one I used is JSONP, which turns the declarative JSON data you would load from a web service into imperative Javascript code. Where a JSON reply from a web service might be {"balance": "3942.12"}, a JSONP reply would include a function call: callback({"balance": "3942.12"}).

Why wrap data in a function call? Because the same-origin policy does not apply to scripts! A browser won't let you load a few bytes from a remote server as data, but it will happily load those same few bytes as code and execute them. And so you execute the remote JSONP in your page and it calls your callback() function to use the data. A kludge, but frameworks like jQuery hide the mess (sketch below).

At first blush this seems crazy. It's insecure to let me load remote data but it's fine to let me run remote code? But I'd misunderstood the purpose of the same-origin policy. The reason somebits.com can't load stuff from geonames.org isn't to protect somebits, it's to protect geonames! Without this policy cross-site attacks would be trivial; any Javascript code in your browser could steal cookies from other sites, inspect private data, etc. Apparently executable Javascript is considered not-private, so I'm allowed to execute anyone else's code anywhere. (Terrible assumption; no doubt there's already a class of attacks against JSONP services.) Frankly the whole thing smells rotten and shows just how complex and twisted the Javascript security model is.

There's a couple of proposals for more cleanly enabling remote web service requests. Firefox 3.5 implements HTTP access control, a W3C standard where a website can say "go ahead and let anyone load this content remotely". IE8 implements XDomainRequest, which is restricted to "anonymous" requests to limit security exposure. And so the squabble continues.
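To make the mechanism concrete, here's a hand-rolled sketch of the JSONP trick with no framework at all. The service URL and the handleBalance callback name are made up for illustration; any real JSONP service lets you name the callback via a query parameter.

```javascript
// A minimal JSONP sketch (hypothetical service URL and callback name).
function fetchBalance() {
  // 1. Define a global function for the remote script to call.
  window.handleBalance = function (data) {
    alert("balance is " + data.balance);
  };
  // 2. Inject a <script> tag pointing at the remote service.
  //    Script loads are exempt from the same-origin policy.
  var script = document.createElement("script");
  script.src = "http://example.com/balance?callback=handleBalance";
  document.getElementsByTagName("head")[0].appendChild(script);
}
// The server replies with executable Javascript rather than plain JSON:
//   handleBalance({"balance": "3942.12"});
```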
There's a cool new thing in web browsers: the Geolocation API. It lets web pages ask your computer where you are right now. It's easy to program with and already works on the iPhone and Firefox 3.5.
Try out WikiHere, a demo I built. Load the page, give it permission to read your location, and you should get a list of Wikipedia pages near where you are right now. It works great on the iPhone. It works slowly on Firefox and thinks I live at my ISP. Google Chrome, Safari, and MSIE don't support the API yet, although the Gears plugin does.

I'm excited by location-aware applications. Modern cell phones have great web browsers and know my location. Now any web page that I trust can give me results specific to where I am. Yelp restaurant reviews, Google Local nearby businesses, tourist sightseeing, Flickr photos, nearby friends to meet with... All of these location-aware products can now be built easily right into a webapp.

It's a pretty simple API for a Javascript programmer to work with. Basically you call navigator.geolocation.getCurrentPosition() and you get back your current position, heading, speed, and accuracy. There's support in the API for multiple samples, caching, etc. It's complete enough that it looks like you could make a GPS tracker all in Javascript. There's a sketch of the basic call below.

Credits: I cribbed off of Ian Walsh to figure out how to use the Geolocation API. Wikipedia lookup comes from Geonames and its fantastic API, insecurely crammed into the page via JSONP so that I don't have to host any sort of server myself. Thanks to the ubiquitous jQuery library for Javascript sanity. And a hat-tip to the GeoPedia iPhone app for the inspiration; it's cool that with this new API you don't need a native app at all to build this kind of function.
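Here's roughly what the core of such a demo looks like. This is a sketch rather than WikiHere's actual source; the #results element and the exact GeoNames URL are assumptions.

```javascript
// Sketch: ask the browser for a position, then look up nearby
// Wikipedia articles via the GeoNames JSONP API using jQuery.
function showNearbyWikipedia() {
  if (!navigator.geolocation) {
    alert("This browser has no Geolocation API.");
    return;
  }
  navigator.geolocation.getCurrentPosition(
    function (position) {
      // jQuery treats callback=? as a JSONP request,
      // sidestepping the same-origin policy.
      $.getJSON("http://ws.geonames.org/findNearbyWikipediaJSON?callback=?",
        { lat: position.coords.latitude, lng: position.coords.longitude },
        function (data) {
          $.each(data.geonames, function (i, article) {
            $("#results").append($("<li>").text(article.title));
          });
        });
    },
    function (error) {
      alert("Couldn't get location: " + error.message);
    });
}
```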
I took myself and my new iPhone 3GS out for a two-night road trip. The iPhone really is a perfect travel companion. Here are some travel-specific apps (and hardware) I used:
The recent theft of confidential documents from Twitter has proven more than ever: Internet logins suck. It's not just that it's too easy to hack our accounts, it's also a pain for us to log in everywhere. I've got 400+ logins at different sites, not to mention a variety of fake accounts and email addresses for sites I don't want an account with. And like everyone else I tend to use the same password at a lot of unimportant places. It's terribly insecure.
There is a solution for the Web login mess: OpenID, what the nerds call "federated authentication". It's simpler than it sounds. You log into a big site on the Internet like Yahoo or Google: this is your OpenID provider. Then when some little blog wants you to log in, it asks your OpenID provider who you are and logs you in on their say-so. One password for the whole Internet; it's very convenient.

OpenID will also be more secure. It allows normal websites to get out of the messy business of logins and just delegate responsibility to a serious authentication provider. Think of it: no more sites storing your password wrong, no more bizarre vulnerable security questions at every web site you go to. Just one safe login at a company with engineers dedicated to getting it right. Your OpenID provider could even use simple hardware to make your authentication significantly more secure. I'd gladly pay $25/year to a smart startup to be my secure OpenID provider.

OpenID is usable now. A variety of services you already use will act as a provider for you. I'm using my Flickr/Yahoo login now; I'd like to use Google, but their implementation doesn't seem to work right. Unfortunately, far fewer sites will let you log in via OpenID. But a lot of the big blog sites allow OpenID logins for comments; that's a good start.

OpenID is not quite there yet. Usability is still a bit awkward: my first time logging in via Flickr, LiveJournal decided my name was "M8NL6R93r5YcAH4cOe1pK.tabWS9XTGzFg--" (fixable, but confusing). And there's unpleasant politics associated with user ownership; Yahoo, Google, Apple, and Microsoft aren't interested in having their users use their competitors' logins.

That's why it's time for users like you and me to start demanding OpenID logins. It's more convenient for us and it will be more secure.
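To give a flavor of what happens under the hood, here's a sketch of the redirect a site constructs to kick off an OpenID 2.0 login. It's illustrative only: real code first discovers the provider's endpoint (Yadis/HTML discovery) and later verifies the signed response, both omitted here, and the example endpoint is an assumption.

```javascript
// Build the URL that sends a user to their OpenID provider to log in
// (OpenID 2.0 "checkid_setup" mode). Discovery and response
// verification are left out of this sketch.
function openidLoginUrl(providerEndpoint, returnTo, realm) {
  var params = {
    "openid.ns": "http://specs.openid.net/auth/2.0",
    "openid.mode": "checkid_setup",
    // identifier_select: let the provider tell us who the user is.
    "openid.claimed_id": "http://specs.openid.net/auth/2.0/identifier_select",
    "openid.identity": "http://specs.openid.net/auth/2.0/identifier_select",
    "openid.return_to": returnTo, // where the provider sends the user back
    "openid.realm": realm         // the site the user is agreeing to log into
  };
  var query = [];
  for (var key in params) {
    query.push(encodeURIComponent(key) + "=" + encodeURIComponent(params[key]));
  }
  return providerEndpoint + "?" + query.join("&");
}
// Hypothetical usage: redirect the browser to the provider.
// window.location = openidLoginUrl(
//   "https://open.login.yahooapis.com/openid/op/auth",
//   "http://example.com/openid/return", "http://example.com/");
```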
Once again we're reminded that TechCrunch is not journalism, just a rumour and speculation blog unwilling to do the work required to get stories right.
Around 1am July 7 TechCrunch posted What the Hell Happened to the Free Version of Google Apps. The first sentence asserted "The free version of Google Apps is history." And later, "they just killed the Standard product entirely." The sourcing was Arrington's own observation that the link to the free option was gone from the web page. And the post said "We're emailing Google for comment." (Note the present tense; did he email just when the post was published, in the middle of the night?)

The story turns out not to be true. An update appeared on TechCrunch several hours later from Google explaining "In experimenting with a number of different landing page layouts, the link to Standard Edition was inadvertently dropped from one of the variations". And there the link is, back again on the front page.

In other words, TechCrunch rushed to publish a story before bothering to check any facts. Not doing any investigation, not giving the subject a chance for comment. Just speculating on the basis of one observation. It's nice that TechCrunch at least updated the story with some actual facts after publication (including a snarky retraction), but the damage has already been done.

For a second and much uglier example of TechCrunch's journalistic practice, there's the story of whether last.fm colluded with the RIAA to expose its users to prosecution. TechCrunch said they did, last.fm strongly denied it, then TechCrunch came back with a followup three months later. This second post from TechCrunch isn't bad; it has actual sourcing (albeit anonymous) and a bunch of detail. Only last.fm and CBS both denied it again. And TechCrunch is so compromised there's no way to know what to believe. The story is completely tainted. (The Guardian did a great opinion piece about this debacle.)

Why do I care? Because I care about journalism and I care about truth. And because TechCrunch is influential and is taking over the role that tech journalists used to fill. And the process they follow doesn't safeguard the truth. The Google Apps and Google PC false stories just cause confusion. The last.fm story did real harm to their business. Journalistic practice comes out of decades of experience in acting ethically and working to get the story right. It kills me to see an important blog throw all that out.
Update: Arrington responded to my criticism in TechCrunch comments. He's now asserting "It was a removal of the links to see how conversions to paid went." He also told me to "Go kick a cat or something. You'll feel better afterwards." Guess he's having a bad day.
From the 1901 book Men of California, which also has an astonishing series of portraits including the father of the man who built my house.