What is the Open Data Protocol?
The official definition of the Open Data Protocol (OData) is that it is a Web protocol for querying and updating data, one that provides a way to unlock your data and free it from the silos that exist in applications today. In practice, that means we can select, save, delete and update data from our applications just as we have done against SQL databases for years. The benefit lies in the ease of setup and in the libraries Microsoft has created for us, the developers of Windows Phone 7 Mango apps. It also comes from the fact that OData is a standard whose feed metadata allows a clear understanding of the data.
Behind the scenes, we send OData requests over HTTP to a web server that exposes the OData feed. You can read more about OData here.
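To make that concrete, here is a minimal sketch of the kind of resource URLs those HTTP calls target. The service root and the `Customers` entity set are hypothetical names, not a real feed:

```python
# Minimal sketch of addressing an OData resource over plain HTTP.
# The service root and entity set name below are hypothetical.
from urllib.parse import urljoin

SERVICE_ROOT = "http://odata.example.com/MyService.svc/"

def entity_url(entity_set, key=None):
    """Build the URL for an entity set, or for one entity by key."""
    path = entity_set if key is None else f"{entity_set}({key})"
    return urljoin(SERVICE_ROOT, path)

# A GET on the entity set returns the whole collection as a feed;
# a GET with a key in parentheses returns a single entry.
print(entity_url("Customers"))       # the collection
print(entity_url("Customers", 1))    # one entity by key
```

The client never needs anything beyond an HTTP stack: every piece of data is just a URL you can GET.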
Where did OData come from?
OData started back in 2007 at the second Microsoft MIX conference, where Microsoft announced an incubation project codenamed Astoria. The purpose of Project Astoria was to find a way to transport data across HTTP in order to architect and develop web-based solutions more efficiently. Only after the project had time to incubate did the team see patterns emerge that led to the vision of the Open Data Protocol. The next big milestone was the 2010 Microsoft MIX conference, where OData was officially announced and proclaimed to the world as a new way to handle data. The rest is history.
Building OData on the Shoulders of Web Protocols
One of the great features of the OData protocol is its use of existing, mature standards. I feel the OData team did a great job identifying existing technologies to build upon. The following technologies, standards and protocols were used in the development of OData:
- HTTP (Hypertext Transfer Protocol)
Not much can be done on the Internet without HTTP, so why wouldn't OData use it for transport? Most web developers know about HTTP (or should), so I will not bore you with the details.
- Atom (Atom Syndication Format)
Most people know that Atom is used with RSS-style feeds to syndicate content to others over HTTP. What you may not have known is how similar a feed is to a database.
- An RSS feed is a collection of blog posts, which can be seen as a database table. Databases, of course, contain multiple tables, and that is where OData builds beyond Atom: an OData service exposes multiple collections of typed entities.
- A blog post inside an RSS feed is similar to a record in a database table. The blog post has properties like Title, Body and Published Date, and these properties can be seen as the columns of the table.
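The post-as-record analogy can be sketched in a few lines. The entry payload below is a simplified, made-up example in the OData Atom style (the `d:` and `m:` namespaces are the real OData data-services namespaces, but the entry itself is illustrative), parsed with the Python standard library:

```python
# Sketch: a simplified OData-style Atom entry, where each <d:*> property
# plays the role of a column in a database record. Payload is illustrative.
import xml.etree.ElementTree as ET

ENTRY = """
<entry xmlns="http://www.w3.org/2005/Atom"
       xmlns:m="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata"
       xmlns:d="http://schemas.microsoft.com/ado/2007/08/dataservices">
  <title>My First Post</title>
  <content type="application/xml">
    <m:properties>
      <d:Title>My First Post</d:Title>
      <d:PublishedDate>2011-12-02</d:PublishedDate>
    </m:properties>
  </content>
</entry>
"""

NS = {"m": "http://schemas.microsoft.com/ado/2007/08/dataservices/metadata"}

# Strip the namespace from each property tag to get the "column" names.
row = {el.tag.split("}")[1]: el.text
       for el in ET.fromstring(ENTRY).find(".//m:properties", NS)}
print(row)  # {'Title': 'My First Post', 'PublishedDate': '2011-12-02'}
```

A feed of such entries is, structurally, a table of such rows.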
- REST (Representational state transfer)
OData was developed to follow the definition of REST. A RESTful web service or RESTful web API is a web service implemented through HTTP and the principles of REST. It is a collection of resources, with four defined aspects:
- the base URI for the web service, such as http://example.com/resources/
- the Internet media type of the data supported by the web service. This is often JSON, XML or YAML but can be any other valid Internet media type.
- the set of operations supported by the web service using HTTP methods (e.g., GET, PUT, POST, or DELETE).
- The API must be hypertext driven.
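Putting the last two aspects together, OData's CRUD operations are just the standard HTTP verbs applied to resource URIs. A minimal sketch, again using a hypothetical service root and entity set:

```python
# Sketch: how the standard HTTP methods map onto CRUD against an
# OData resource. The service root and entity set are hypothetical.
BASE = "http://odata.example.com/MyService.svc"

def odata_request(method, entity_set, key=None):
    """Return the (method, url) pair a RESTful client would issue."""
    url = f"{BASE}/{entity_set}" if key is None else f"{BASE}/{entity_set}({key})"
    return method, url

# Create, Read, Update, Delete -- each is just a verb plus a resource URI.
print(odata_request("POST", "Customers"))        # insert a new customer
print(odata_request("GET", "Customers", 7))      # read customer 7
print(odata_request("PUT", "Customers", 7))      # replace customer 7
print(odata_request("DELETE", "Customers", 7))   # delete customer 7
```

Because the verbs carry the intent, the URI space stays uniform: one address per resource, four operations against it.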
What will come with the OData Blog Series?
I hope this blog series will give the community and my readers a new perspective on the OData protocol. We will cover in depth the different areas of the protocol, such as data types, query options and vocabularies. By the end of the series I hope to have answered many of your questions and to have started you identifying areas where OData could add benefit to your solutions, services and products.
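As a small taste of the query options we will cover, here is a sketch of how a client composes them into a URL. `$filter`, `$orderby` and `$top` are real OData system query options; the service root, entity set and filter expression are made up for illustration:

```python
# Sketch: composing OData system query options into a request URL.
# The service root, entity set and filter expression are hypothetical.
from urllib.parse import urlencode

BASE = "http://odata.example.com/MyService.svc"

def query(entity_set, **options):
    """Build a query URL; option names get the OData '$' prefix."""
    qs = urlencode({f"${k}": v for k, v in options.items()}, safe="$")
    return f"{BASE}/{entity_set}?{qs}"

# "The posts after 2011, newest first, top five only."
print(query("Posts", filter="Year gt 2011", orderby="PublishedDate desc", top=5))
```

The whole query lives in the URL, so any HTTP client, browser included, can issue it.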
I also hope it will raise more questions about OData that I or others can answer and generate new ideas that can add more appeal and features to this exciting protocol.
Being a Data Experience (DX) Expert
At the end of the blog series I hope I will also make more of you Data Experience Experts. That is a new term I think I coined some time ago. What is a Data Experience Expert? Well User Experience is defined in Wikipedia as:
User experience (UX) is the way a person feels about using a product, system or service. User experience highlights the experiential, affective, meaningful and valuable aspects of human-computer interaction and product ownership, but it also includes a person’s perceptions of the practical aspects such as utility, ease of use and efficiency of the system.
Based on the UX definition we could define DX as:
Data experience (DX) is the way a person feels about using data. Data experience highlights the experiential, affective, meaningful and valuable aspects of data interchanges, but it also includes a person’s perceptions of the practical aspects such as ease of use and efficiency of the data and data transportation.
I do hope you will find this definition valuable as you gain more experience with OData and other data technologies, and that you can eventually pin the title of DX expert to your resume.