Prezi – Rich User Experiences

Web 2.0 signals a major change in the software market – we are moving to a platform where users can create and disseminate content using powerful, desktop-replacement applications on the web. Rich User Experiences is a design pattern that Web 2.0 exploits to deliver desktop-like applications, powered by JavaScript, XML, AJAX, SOAP and REST technologies. These web apps provide the same features as their desktop counterparts, but have the added advantage of being connected and available anywhere there is an Internet connection. Multiple, lengthy installations are a thing of the past, and data is liberated so it can be shared with friends and colleagues.

There are many examples of Web 2.0 applications that use engaging interaction to provide a rich experience – sites like Microsoft Photosynth, advanced YouTube features like Leanback and the Queue, and tailored experiences for mobile versions of apps (like Google Search with its Instant Previews on mobile). These products and systems all have specific goals and offer a tailored solution to a specific consumer need, while also making use of strategies for rich user experience.

One truly distinctive application in the domain of rich-user-experience online software is Prezi. Prezi is, as you may guess, a presentation program. That may make it sound like Microsoft PowerPoint, but this product is nothing like the presentation programs we are used to on the desktop. Prezi is a zooming presenter – there is no concept of pages or slides. Everything sits on one large canvas that can have text, headings, images and video embedded in it, with “paths” (the navigation structure) placed over the top to control the presentation flow. This means a lot of effort has gone into tailoring the presentation development process to match the goals of the system and the capabilities of the web.
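
To make the canvas-and-path model concrete, here is a minimal sketch of how such a presentation might be represented. This is purely illustrative – the class and field names are my own assumptions, not Prezi's actual data format.

    from dataclasses import dataclass, field

    @dataclass
    class CanvasObject:
        """A single item placed somewhere on the one large canvas."""
        kind: str           # "text", "heading", "image" or "video"
        content: str        # text body or media URL
        x: float = 0.0      # position on the canvas
        y: float = 0.0
        scale: float = 1.0  # zoom level at which the object is framed

    @dataclass
    class Presentation:
        """One big canvas plus an ordered path over its objects."""
        objects: list[CanvasObject] = field(default_factory=list)
        path: list[int] = field(default_factory=list)  # indices into objects

        def next_step(self, current: int) -> CanvasObject:
            """Advance the presentation by moving to the next path stop."""
            return self.objects[self.path[(current + 1) % len(self.path)]]

Stepping along the path is then just a camera move – panning and zooming from one object's position and scale to the next. No slides are ever created.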

A screenshot of the Prezi edit interface.

Prezi is a striking example of taking an existing desktop application category and adapting it to suit the patterns employed by Web 2.0. The Prezi system is available across platforms – on any desktop or notebook via any modern web browser, and also on Apple’s iPad – and includes tools similar to a desktop presentation app’s, minus some more advanced features such as transitions and build-ins. At first glance this might suggest Prezi simply hasn’t bothered implementing these features, or considers them unimportant or technically impossible. On the contrary, Prezi is simply focusing on the core components required to get innovative and exciting presentation software onto the web. This is an important factor, as the simplicity of the design creates a focus on a compelling workflow. This differentiation is critical to focusing the user on Prezi’s core competency – engaging and highly visual presentations. The reduced, condensed tool set also simplifies use – users can learn quickly. All controls are highly visual and designed to feel natural. The only confusing part of Prezi is letting go of preconceptions of what a presentation is, which means users need to consider whether they can work with such a radically different presentation paradigm.

Looking at the parts Prezi doesn’t do so well, the main issue is deep personalisation. While tools are easy to get to, there is no “shortcut” system that makes the most frequently used tools easier to access (a sketch of this idea follows). There is also no option to automatically remember a user’s preferred privacy/visibility setting for new prezis. These issues are easily fixed and aren’t critical, but addressing them would round out the experience. Overall though, Prezi integrates the best practice for the Rich User Experience Web 2.0 pattern.
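
As a sketch of the shortcut idea (entirely hypothetical – Prezi exposes no such mechanism), the editor could simply count tool invocations and surface the user's most-used tools in a quick-access area:

    from collections import Counter

    class ToolShortcuts:
        """Track tool usage and expose the user's most-used tools."""
        def __init__(self, top_n: int = 3):
            self.usage = Counter()
            self.top_n = top_n

        def record(self, tool: str) -> None:
            self.usage[tool] += 1

        def shortcuts(self) -> list[str]:
            """Tools worth pinning in a quick-access area."""
            return [tool for tool, _ in self.usage.most_common(self.top_n)]

    bar = ToolShortcuts()
    for tool in ["insert_image", "zoom", "insert_image", "path", "insert_image"]:
        bar.record(tool)
    print(bar.shortcuts())  # ['insert_image', 'zoom', 'path']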

As part of my personal commitment to practising what I preach, I created a basic Prezi about this week’s content. It is embedded below.


Prezi will give me a new way to communicate ideas with an audience. How could you use Prezi?

References

Ray, B. (2011). Google squeezes thumbnails into mobile search. Retrieved April 1, 2011, from http://www.theregister.co.uk/2011/03/10/google_preview/

Stewart, A. (2007). User Experience, Rich Internet Applications and the Future of Software. Retrieved April 1, 2011, from http://www.zdnet.com/blog/stewart/user-experience-rich-internet-applications-and-the-future-of-software/256


Google Wave – Innovation in Assembly

Innovation in Assembly is a design pattern positioned within Tim O’Reilly’s vision of Web 2.0. The pattern deals with the explosion of data available online due to shifts in technology and related use cases. The ability to leverage this data has become a key strategic advantage, and many companies are leveraging their systems and data to deliver more customer value – Flickr’s App Garden is one example.

Google is well known as an innovative company that continuously strives to make use of new data sources and the information made available through its own knowledge of the Internet (i.e. through its massive search index and property portfolio). There are many Innovation in Assembly examples that could be covered here: Google Public Data Explorer (a great way to view worldwide statistical data; I was tempted to blog about it) is just one. Regardless, I found a much more useful tool (on a personal level) from Google: Google Wave.

Google Wave displaying the wave used during the development of this very blog post.

Google Wave is a tool that facilitates communication, data aggregation and content creation in one central location. It provides a collaborative environment where users work together in real time on a “document”. A document can contain almost anything: users can add Maps, video, images and much more data from Google’s own sources, plus search results. Developers and organisations can build their own extensions (called Gadgets and Bots) that give users access to their data, using Google’s open APIs for Wave.
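
To give a flavour of the extension model, here is a conceptual sketch of what a Wave-style bot does. I have deliberately not used Google's actual waveapi library – the class and event names below are simplified assumptions, meant only to show the idea of a robot reacting to document events and injecting external data.

    # Conceptual sketch of a Wave-style bot: it listens for edits to a
    # shared document and appends data fetched from an external source.
    # All names here are illustrative, not Google's real robots API.

    class Blip:
        """One message/fragment inside a wave document."""
        def __init__(self, text: str = ""):
            self.text = text

        def append(self, more: str) -> None:
            self.text += more

    class WeatherBot:
        """Reacts when a participant submits a blip containing a trigger word."""
        def on_blip_submitted(self, blip: Blip) -> None:
            if "!weather" in blip.text:
                # A real bot would call an external API here; we stub it.
                blip.append("\n[bot] Brisbane: 24°C, partly cloudy")

    blip = Blip("Planning the field trip. !weather")
    WeatherBot().on_blip_submitted(blip)
    print(blip.text)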

The Wave tool is currently not under active Google development, but there is an active community of people using it and developing Wave gadgets and bots. Because Google makes the product open source, provides API options (for bots and extensions) and publishes clear community guidelines, people are willing to contribute to the product. So much data becomes accessible in a flexible, collaborative environment. The openness of the platform and the flexibility provided enable collation and connection of data from any web-connected resource (as per O’Reilly’s statements in his famed 2005 article on Web 2.0). Google emphasises community and platform involvement, making functionality highly available and involving people in the process of development (see the Community Principles). This integration of data, sources, users and Google means that Google, although no longer actively building the product, has created a product which is the epitome of Innovation in Assembly.

The power of Google Wave lies in the power of Google’s search capabilities, the ability to link to a wave (making data available – part of the “Data is the Next Intel Inside” design pattern) and the ingenuity and contribution of community developers. The openness of Google Wave means it can be reused and modified under open source licences, enabling remixing (another part of the Innovation in Assembly design pattern). Core Wave functionality is empowered by Google using its own API hooks into its services (such as Search and Maps) and by allowing users to create Gadgets and Bots which introduce new data and functionality.

Google Wave changed the way I collect and record blog content. What will it do for you?

TweetMeme – Data is the Next Intel Inside

In O’Reilly’s core design patterns for Web 2.0, one of the most important is Data is the Next Intel Inside. This pattern places the focus on control over data rather than control over the code, framework, software or hardware. There is strategic advantage in owning and controlling data, as it can provide opportunities heretofore unseen on any computing platform. Data comes in all forms, and what can or should be done with it depends on strategy. ZDNet suggests potential strategies, including creating difficult-to-recreate data, making data open, or charging for access to data. Another is innovating around data, such as Google’s vision of using its data stores to produce more relevant results and new ways to query.

Twitter is an absolutely explosive product, with over 140 million tweets sent daily (about 1 billion per week). That indicates a massive number of users generating a lot of data – and often white noise – which makes it hard to find good, relevant content on Twitter (for example, as I write this, two of the topics trending in Australia are #drunkestievergot and #throwagrenade; the epitome of relevance and importance). There is an overwhelming amount of data, and people are getting lost while trying to find good, relevant content (the use of Twitter in recent uprisings and during natural disasters aside). Even Xerox’s PARC research facility recognises there is an issue with identifying relevant people and content on the service. This is where tools like TweetMeme come in – an application that sorts through the data to deliver useful information (specifically news) to users.

TweetMeme Homepage
TweetMeme aggregates news links from Twitter.

TweetMeme is a standalone product that filters the millions of tweets sent daily for relevant news stories, which appear in tweets as links. Based on the number of times a link appears across all tweets in a given time frame, it enters the rankings, and the top links or stories are displayed Digg-style on TweetMeme as a collective news feed. As more users post, retweet or comment on a link, it moves higher up the rankings. Links and stories are categorised based on content, generating genres such as Technology, Sport, World & Business, plus Gaming, Comedy and Lifestyle (to name most of them).
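
A simplified version of this kind of time-windowed ranking is easy to sketch. The counting and cut-off below are my own guesses at the approach – TweetMeme has never published its exact algorithm:

    from collections import Counter
    from datetime import datetime, timedelta

    def rank_links(tweets, window_hours=24):
        """Count how often each link appears within the window and rank them.

        `tweets` is an iterable of (timestamp, link_url) pairs.
        """
        cutoff = datetime.now() - timedelta(hours=window_hours)
        counts = Counter(url for ts, url in tweets if ts >= cutoff)
        return counts.most_common()  # Digg-style: most-mentioned links first

    now = datetime.now()
    sample = [
        (now, "http://example.com/big-story"),
        (now, "http://example.com/big-story"),
        (now - timedelta(hours=30), "http://example.com/stale-story"),
        (now, "http://example.com/small-story"),
    ]
    print(rank_links(sample))
    # [('http://example.com/big-story', 2), ('http://example.com/small-story', 1)]

Note how the stale link drops out entirely: recency is as important as raw mention count for a news feed.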

TweetMeme can aggregate so much data from Twitter because of the very foundations of Twitter. The service displays an account’s tweets publicly by default. The fact that so many users enrich links with hashtags, reviews, comments and experiences provides enough extra information for TweetMeme to classify tweets about the same story together and then determine the story’s recency. TweetMeme’s functionality is further facilitated by:

  • The fact that Twitter’s data is easily searched, and can be addressed and located by applications external to Twitter (see the sketch after this list),
  • Twitter is an open platform that invites innovation and does not lock tweets and data away from reuse,
  • Twitter users are generally relaxed in their approach to reuse of their data – people enjoy seeing their data reused in novel ways,
  • The barrier to entry for both Twitter and TweetMeme is low (TweetMeme actually uses Twitter’s OAuth service for login), encouraging data to be placed in the system by users,
  • Data is reused – users can retweet from within TweetMeme, creating new content in Twitter and boosting the value of existing content both within Twitter and within TweetMeme,
  • Data is enriched – TweetMeme allows a user to post a comment on a news story as a new tweet. This enriches the entire data ecosystem: TweetMeme handles more data and gains clearer insight into it, and returns the data to Twitter as a bonus,
  • The service is incredibly open – third-party websites can embed the TweetMeme re-tweet functionality natively in their site or service, increasing network effects.
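
To illustrate the first point above, here is roughly what external, unauthenticated access to Twitter's data looked like via the public search API of the era. The v1 endpoint shown has since been retired (which is why the live call is commented out), so treat this as a historical sketch:

    import json
    import urllib.parse
    import urllib.request

    def search_tweets(query: str) -> list[str]:
        """Fetch tweets matching `query` from Twitter's circa-2011 search API.

        The unauthenticated v1 endpoint below required no API key at all,
        which is exactly what made tools like TweetMeme possible. It has
        since been retired, so this call no longer succeeds.
        """
        url = ("http://search.twitter.com/search.json?"
               + urllib.parse.urlencode({"q": query, "rpp": 100}))
        with urllib.request.urlopen(url) as response:
            data = json.load(response)
        return [result["text"] for result in data["results"]]

    # Any third party could see every public tweet mentioning a story:
    # texts = search_tweets("http://example.com/big-story")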

The short YouTube video below explains how retweeting and platform openness work on TweetMeme.

The result is that TweetMeme can innovatively use user-created data and present it in a completely different format. The only real issue for TweetMeme is that the data is so open that the barrier to entry for a competitor service is incredibly low (Digg already integrates sharing via Twitter, so they clearly have the knowledge to adapt their ranking algorithms should they choose to implement similar functionality). This means innovation is a constant pressure – TweetMeme needs to stay ahead of the game to be successful in the long term.

TweetMeme will change the way I collect and see news; what will it do for you?

Harnessing Collective Intelligence using LiveMocha

In O’Reilly’s core Web 2.0 strategies, there is a focus on leveraging the knowledge of the crowd. The Harnessing Collective Intelligence pattern is focused on developing content that can be leveraged to the strategic advantage of an individual or organisation. There is a clear focus on producing reliable and correct data from the knowledge of experts out on the Internet. Looking around the Internet, there are many examples of collective intelligence – Google Docs, crowdsourcing via Twitter, and suggestion engines such as Last.fm and Urbanspoon – but none of those really does much in the way of building the global village.

This is where LiveMocha comes in. LiveMocha is a social language learning platform, where the users are both students and teachers. All of the courses hosted on LiveMocha are written and maintained by the community; even interactive writing and speaking “exam” submissions are checked and assessed by native speakers in the community.

LiveMocha Language Lesson
Interface for learning to speak and read a language in LiveMocha

The LiveMocha experience has myriad positive factors making it a viable way to learn a language online – course content is produced by expert peers and is checked by other course experts. Users learning a language are supported by users who are experts in that language and can actively contribute to content; there is a focus on users helping users in both cases. As the user base and active contribution grow, more languages and quality content become available, increasing the application’s value, which is likely to attract more users, magnifying and multiplying the effect.

Users are invited to use the service and actively participate through multiple channels, encouraging growth and customer loyalty. An account on the service is free (lowering the barrier to entry) and designed to be social, and courses are mostly free. For users acting as experts (the language teachers), there is a points or cash reward system for their teaching, linked to feedback from student users on the quality of that teaching. This implicitly creates an environment where good content and better teachers are rewarded and recognised more, improving content quality.
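
As an illustration of how such an incentive loop might be wired up (the weighting below is my own invention, not LiveMocha’s actual formula), rewards could simply be scaled by average student feedback:

    def teacher_reward(reviews_assessed: int, feedback_scores: list[float],
                       points_per_review: int = 10) -> float:
        """Scale a teacher's points by the average student rating (0-5).

        Hypothetical formula: good feedback multiplies the base reward,
        so better teachers visibly earn more.
        """
        if not feedback_scores:
            return 0.0
        avg_rating = sum(feedback_scores) / len(feedback_scores)
        return reviews_assessed * points_per_review * (avg_rating / 5.0)

    # 20 reviews assessed, consistently high ratings: ~186.7 points
    print(teacher_reward(20, [5, 4, 5]))

Whatever the real mechanics, the design principle is the same: the reward signal is driven by the community’s judgement of quality, not by raw activity volume.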

The only real negative for LiveMocha is that the quality of a course comes back to the quality and knowledge of contributions. The quality and accuracy of each course’s content depends on the number and quality of contributors available to develop and approve it; it’s all tied back to the network effect.

Overall, LiveMocha is quite distinctive. The service deals with a topic innately relevant to a global audience, and leverages the knowledge of its users well. The number, coverage and quality of courses, while considerable now, will continue to grow as the active user base grows. It will be exciting to trial this product over a longer period to see whether it is completely feasible to learn a language solely online using collective intelligence.