As announced a while ago, I went to GIS Day in Zurich, Switzerland.
On my employer’s blog, I have written up a review of the event in German. Head over to find out about interesting Switzerland-based GIS projects (in-browser translation should be fine to get the gist, I suppose).
Swiss air rescue organisation Rega uses GIS for emergency dispatching
Mark Graham has posted at Zerogeography a critique of a “Twitter map” that featured in the Economist. The map was compiled by Portland Communications and Tweetminster and shows the number of tweets per country (the original version of the map can be found in this presentation by Portland Communications):
Africa Twitter map by Portland Communications, Economist
Mark Graham raises these interesting points regarding this map:
- 11m tweets in Africa over a three-month period is probably a vast underestimate, since the joint Portland Communications/Tweetminster analysis looked only at geocoded tweets.
- The analysis doesn’t account for the provenance of the tweets: are many of them issued by a few users, or are there actually many people behind a country’s many tweets? This is likely a very relevant point, since many crowdsourcing projects find that a small minority of users contributes the majority of the content. The same may hold for Twitter; the question which remains is whether the proportion of heavy contributors varies between countries (which would harm the comparability of countries).
- The analysis doesn’t relate the number of tweets to the number of inhabitants. We have thus no way of knowing whether a big number of tweets means an extraordinarily high proportion of Twitter users in the population, or not.
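That last normalisation step is simple once population figures are joined in. A minimal sketch (all tweet counts and populations below are invented for illustration, not the Portland Communications/Tweetminster figures):

```python
# Relate raw tweet counts to population size. All figures are
# made up for illustration, not taken from the Economist map.
tweet_counts = {"Kenya": 2476, "Nigeria": 1646, "Egypt": 1214}
population = {"Kenya": 41_000_000, "Nigeria": 162_000_000, "Egypt": 82_000_000}

def tweets_per_million(country):
    """Tweets per one million inhabitants for the given country."""
    return tweet_counts[country] / (population[country] / 1_000_000)

for country in tweet_counts:
    print(f"{country}: {tweets_per_million(country):.1f} tweets per million inhabitants")
```

Two countries with the same raw count can end up far apart on such a per-capita scale, which is exactly why the raw map is hard to interpret.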
Mark states that in a study he and his team conducted using the Twitter Streaming API, only 0.7% of all tweets were found to actually contain geolocation information (the Africa Twitter map is thus based on a really small sample of the tweets which have been sent from within African countries!). That proportion is something I have wondered about since I started to tinker with the Twitter REST API a few weeks ago. Unlike the Streaming API (the so-called “firehose”), the REST API has tight query limits, so I haven’t acquired a big enough sample of tweets to judge the prevalence of location information in tweets myself (acquiring a random sample of tweets is also not the aim of my studies).
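Measuring that prevalence on an already-collected sample is straightforward: count the tweets whose coordinates field is set. A minimal sketch (the sample tweets below are invented stand-ins; real Twitter API payloads carry an analogous "coordinates" member that is null unless the user shared a location):

```python
# Estimate the share of geotagged tweets in a collected sample.
# The tweet objects below are made-up stand-ins for API responses,
# whose "coordinates" member is null (None) for most tweets.
sample = [
    {"id": 1, "text": "hello", "coordinates": None},
    {"id": 2, "text": "at the lake",
     "coordinates": {"type": "Point", "coordinates": [8.54, 47.37]}},
    {"id": 3, "text": "lunch", "coordinates": None},
]

def geotagged_share(tweets):
    """Fraction of tweets carrying explicit point coordinates."""
    if not tweets:
        return 0.0
    tagged = sum(1 for t in tweets if t.get("coordinates") is not None)
    return tagged / len(tweets)

print(f"{geotagged_share(sample):.1%} of the sample is geotagged")
```

The hard part, of course, is not this count but obtaining an unbiased sample in the first place, which is exactly what the REST API’s query limits make difficult.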
As Mark further points out, this shortcoming on the data side makes the map potentially useless, in the worst case even misleading: users in different countries may expose location in their tweets with different probabilities, for example due to:
- a different brand mix of end-user devices (for example, a different prevalence of smartphones versus dumbphones, the latter of which can use Twitter via SMS)
- a different mix of Twitter clients: clients may expose location-sharing settings in different ways and may encourage or discourage users from opting into location sharing
- varying awareness of, or views on, privacy issues around location sharing
- different societal norms towards location sharing
If the prevalence of location sharing differs between countries, the Africa Twitter map cannot even serve as a proxy for the true number of tweets sent from African countries.
Further takeaways thanks to Mark Graham:
- Using the location information in description fields of Twitter users’ profiles is a bad substitute for actual location information attached to tweets.
- Time zone information as another approach to rough positioning of a Twitter user isn’t a feasible alternative route either, since many users don’t bother to set it in their profile.
- And, most importantly and most generally applicable: any analysis of data from social media or crowdsourcing initiatives has to scrutinise the data for potential confounding variables, inherent biases, and flaws in data collection (sampling), data processing and analysis. No analysis is complete without these questions being asked – and if they’re not clarified in the analysis, it becomes the end user’s duty, though that can unfortunately be difficult without access to the raw data.
Via the GIS Doctor (a fun blog in itself) I got introduced to the NY Times’ Opinionator. The Borderlines category on the Opinionator is maintained by author/blogger Frank Jacobs, who “writes about cartography, but only the interesting bits.”
Borderlines tells interesting stories about country borders. So far, I’ve read the superbly entertaining and well-informed “Where is Europe?”, which deals with the problems geographers face(d) in defining the geographic extent of “Europe”. It’s all there: the perspectives of the Britons, Swiss, Croats and Eurocrats, Turks, Russians, …; the back and forth of the European boundaries, especially (but not only) those in the east and south; and some surprises, even for geographers.
There's a plethora of pitfalls when dealing with this beast... (NY Times)
Is this an off-topic post on my blog?
It depends: here we have great examples of the fiat versus bona fide divide when it comes to geographic regions and objects, the fuzziness or spatial vagueness of toponyms and the regions they denote, qualitative spatial reasoning, vernacular geography, etc. – some of which I have also dealt with in my PhD thesis, and all of which also pertain to what you do with GIS, spatial analysis and geovisualization.
Very interesting stuff! But for now I’ll spare you the details and recommend “Where is Europe?” and Borderlines.
The net is abuzz with the news about Google+, Google’s newest attempt to counter Facebook’s dominance in the realm of social networks. Apart from India and Brazil, where Google’s Orkut seems popular, the search engine giant has so far failed to gain a foothold in social networking.
"On one hand, you'll never be able to convince your parents to switch. On the other hand, you'll never be able to convince your parents to switch!" (by xkcd)
Currently, Google+ is invite-only, so no hands-on testing yet. But one of the teaser videos already shows that the visual design of the newest Google product deviates somewhat from what we are used to seeing out of the Googleplex. Engadget has collected numerous trailers highlighting Google+ features; the one in question, The Google+ project: A quick look, is embedded below.
With the 2010 BP oil spill in the Gulf of Mexico and the 2011 Fukushima nuclear power plant accident, environmental disasters have received big coverage in the mass media over the last months. However, once the biggest shock and public outrage have passed, the aftermath of such disasters tends to be less newsworthy to traditional media outlets.
“Big Energy has their communications war room. Counterspill.org is ours.”
This is the claim of Counterspill, whose stated mission is to “promote awareness about the impact of non-renewable energy disasters through a living archive that combines best-in-class reporting, research, social media and community engagement.” Basically, the idea behind Counterspill is to provide, on a one-stop portal, a counter-narrative to that of the non-renewable energy industry. Counterspill was launched in April 2011. Its sponsors and partners are primarily philanthropic and environmental organisations as well as NGOs.
The information in the disasters rubric on Counterspill’s website can be accessed directly from the front page. It features an interactive world map and timeline that plot gas, oil, nuclear and coal accidents in space and time. The timeline can be dragged to select a time window. Using filters, one can include only accidents of a certain kind, with a defined amount of cleanup costs, or involving certain companies. (I assume said cleanup costs also control the size of the circles on the world map, though this is not explained anywhere.)
Counterspill world map
Through clicking on one of the disasters in the world map, or directly in the disasters rubric, one can get information about individual accidents. The section is very well designed: the central element is an interactive timeline.
While I am familiar with the context of, and – from a GIS perspective – the research into, OpenStreetMap (OSM) and other crowdsourcing efforts in the geospatial field, I was not aware that there are books dedicated to OSM. Over at Po Ve Sham, Muki Haklay hosts a comprehensive review (by Thomas Koukoletsos and himself) of two OSM books. The two books covered are:
The former book also has a dedicated website in German and English.
In some parts the review by Thomas Koukoletsos and Muki Haklay sounds relatively equivocal, but the two come up with a strong recommendation at the end. Head over to Po Ve Sham for the full details.