Here is the definition from an authoritative vendor-neutral body:
The World Wide Web Consortium’s glossary defines Service Oriented Architecture (SOA) as
"A set of components which can be invoked, and whose interface descriptions can be published and discovered"
Note that the definition says nothing about the transport; it is all about interface descriptions. Web services are therefore a subset of SOA.
As in all such interface-standardization efforts, discovering the service (and its semantics) is key.
How to make apps talk to each other meaningfully (semantics and syntax) without a human in the loop is a problem as old as operating systems.
This has been, and remains, the “holy grail” of integration. Getting apps to talk across their boundaries has been a staple of data processing for as long as I can remember.
We have had shared memory, files, networking, RPC, and CORBA; now the latest idea is to re-use the Web servers already connected to apps as Web services, parsing the HTML or XML that travels over HTTP to understand the context of the data. [We seem to be adding layer after layer of code to do this for some reason. Maybe with the hope of adding some sort of “intelligence”, maybe not…] When we generalize beyond HTTP to plain TCP/IP interfaces, we call it SOA. This tells us that TCP/IP is now a commodity layer; it is all about commoditization moving up the stack.
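To make that "re-use the Web server as a service" idea concrete, here is a minimal sketch: fetch an XML payload over HTTP and parse it so the program, not a human, recovers the data's context. It assumes a browser environment (where fetch and DOMParser are built in); the URL and the <price> element are invented for illustration.

```typescript
// Hypothetical example: treat an app's existing Web server as a "service"
// by fetching its XML output over HTTP and parsing out the data we need.
// Assumes a browser (fetch and DOMParser are standard there); the URL and
// the <price> element are made up for illustration.
async function getPrice(symbol: string): Promise<number> {
  const response = await fetch(`https://example.com/quotes?symbol=${symbol}`);
  const xmlText = await response.text();

  // Parse the XML so the program recovers the context of the data.
  const doc = new DOMParser().parseFromString(xmlText, "application/xml");
  const priceNode = doc.querySelector("price");
  if (priceNode?.textContent == null) {
    throw new Error(`No <price> element in the response for ${symbol}`);
  }
  return parseFloat(priceNode.textContent);
}
```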
In all cases we need to make the plug fit the socket and make sure the voltages are OK; i.e., the syntax and the semantics have to work across the boundary.
The Web has given us a new way to think about this.
Links – URIs and URLs
SOA is the idea that an app can “choose, click on, and follow a link” by itself to get a job done – and sometimes fill in forms too.
There are four major verbs we can apply to URIs and URLs: GET (click on a link), PUT (put up a resource), POST (fill in a form, mostly), and DELETE (get rid of a resource), plus a few more. This is HTTP. It is meant to be stateless (except for cookies and URI-held sessions…) because the state is supposed to be held by the user – and when the client becomes an app, that can complicate things greatly: all of a sudden you have servers talking to each other with state.
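As a sketch, the four verbs against a single resource might look like this; the endpoint and payloads are hypothetical, and fetch is the standard API in browsers and Node 18+.

```typescript
// The four major verbs applied to a hypothetical resource URI.
// Endpoint and payloads are invented for illustration.
async function demo(): Promise<void> {
  const resource = "https://example.com/orders/42";

  await fetch(resource);                          // GET: "click" the link
  await fetch(resource, {                         // PUT: put up a resource
    method: "PUT",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ status: "shipped" }),
  });
  await fetch("https://example.com/orders", {     // POST: fill in a form
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ item: "widget", qty: 3 }),
  });
  await fetch(resource, { method: "DELETE" });    // DELETE: get rid of it
}
```

Note that each request carries everything the server needs; no conversational state survives between calls, which is exactly the statelessness the protocol intends.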
Still, we have not progressed to semantics at the app layer – this is transport and syntax.
We have always depended on humans (programmers, analysts, and users). We have been doing this – imperfectly – first through the command line and shell scripts, then GUIs, and now Web pages and forms.
When you cut and paste, when you embed a table or a picture in a document, and when you save a file to be retrieved by another app, you are doing inter-process communication. Windows and GUIs let us do this more easily, with visuals – it may be their most obvious value proposition, the basis of the MS empire. The Web is similar: we click on links to hop among and between servers and apps.
Computers, on the other hand, need to be taught by rote. They can crawl web sites, but they have difficulty making them interoperate without humans writing the glue. Web 2.0 is about doing this more easily – “mashing up” service interfaces using XML and HTTP and the smarts of a browser scripting language, ECMAScript (JavaScript).
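A toy mashup along those lines might look like the sketch below; both URLs and response shapes are invented, and a real mashup would hit actual service APIs.

```typescript
// A toy Web 2.0 "mashup": pull two independent services over HTTP and join
// them in the client. URLs and response shapes are hypothetical.
async function mashup(city: string): Promise<void> {
  const [weather, events] = await Promise.all([
    fetch(`https://example.com/weather?city=${city}`).then(r => r.json()),
    fetch(`https://example.com/events?city=${city}`).then(r => r.json()),
  ]);

  // The join below is the human-supplied glue: nothing in either service
  // says these two fields belong together. A programmer decided that.
  for (const ev of events) {
    console.log(`${ev.name}: expect ${weather.forecast}`);
  }
}
```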
SOA ideals say that we can publish somewhat constrained specs that will allow machines to automatically select forms and links and act on them, even ones they have never seen before.
The more realistic approach is to say that we will need humans to guide the machines, but that the molding of sockets and plugs will get less onerous: we are attempting to standardize the interface syntax and restrict the semantics to a published, well-managed, machine-discoverable set. In the interim, this helps us integrate stuff with less work.
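To illustrate that constrained, machine-discoverable ideal, here is a sketch of "publish, discover, invoke". The registry format is invented for illustration, not any real standard (the era's actual efforts were UDDI, WSDL, and the like): the service publishes a small list of operations, and a generic client picks one by its published name and follows it, even though it has never seen that service before.

```typescript
// A sketch of "publish, discover, invoke" with an invented registry format.
interface OperationDescription {
  name: string;                              // label from a managed, published vocabulary
  method: "GET" | "PUT" | "POST" | "DELETE";
  uri: string;
}

async function discoverAndInvoke(registryUrl: string, wanted: string): Promise<unknown> {
  // Discover: fetch the published interface descriptions.
  const ops: OperationDescription[] = await (await fetch(registryUrl)).json();

  // Choose: select the link by its published semantics, not a hard-coded URL.
  const op = ops.find(o => o.name === wanted);
  if (op === undefined) throw new Error(`No operation named "${wanted}" published`);

  // Follow: the app "clicks" the link on its own.
  const response = await fetch(op.uri, { method: op.method });
  return response.json();
}
```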