REST Architecture Discussion


The REST hypothesis is that the semantics of HTTP constitute a coordination language which is sufficiently general and complete to encompass any desired computational communication pattern. I.e., the hypothesis is that GET / PUT / POST / etc. can provide all coordination semantics between discretely modeled units of programs.

I'd like to know... Is HTTP Turing-complete? And then, is REST Turing-complete? Because to me it looks like SqlReinvention?; after all, we could say that "The SQL hypothesis is that the semantics of RelationalAlgebra constitute a language which is sufficiently general and complete to encompass any desired computational communication pattern. I.e., the hypothesis is that INSERT / SELECT / UPDATE / DELETE can provide all coordination semantics between discretely modeled units of programs."

So... RestIsJustSqlReinvented... take a look at MicrosoftAstoria (an experimental REST framework from Microsoft)... doesn't it look like SQL reinvented? Why is using XML for interoperability so popular, while using SQL (though very common) for interoperability is not? Why do we have XML web services and not SQL web services? Or, if like me you believe that SqlFlaws make SQL not such a great option, why don't we have RelationalWebservices?? Perhaps based on a language like the one described in TheThirdManifesto...

I agree that the assumption and some of the structure look like SQL. But there are some important differences:

So you get the advantages of SQL (and in fact you can easily apply REST to SQL and make your tables and rows into URLs; try the reverse) without the overhead of the database that SQL always implies underneath it. -- .gz

I think you can "try the reverse" because REST doesn't make your tables and rows into URLs, it makes your queries into URLs (and sometimes it makes your rows and tables into XML). And relational queries (I think) are very good for finding the data you are looking for... in fact they are so useful that now those queries are embedded into REST URLs... So now REST is transforming the browser into a RelationalQueryTool?? Is that what was intended when REST was invented?


All these words about REST are wasted. Somewhere operation dispatching is happening. Does it really matter where?

If scalability to the proportions of "the web" matters, then the short answer is "yes".

Can you explain? Operation dispatching is a light operation. It doesn't really matter if your webserver is doing it through URLs or the application is doing it via some field in a data structure. If functionality terminates on a node, that functionality is dispatched. From a programming perspective there's not much difference. There's a negative in that you are floating around all these URLs that must not change. There's a deserialization issue when it comes to getting to the operation, but this shouldn't be a killer with proper packet structure.

Can you explain how REST equates to the location of "operation dispatching"? Have you read the dissertation?

Yes I have. IMHO there's nothing to it, and it has nothing to do with scaling. The verbs are operations, and those don't really make a difference. If they do, why do they? I do both the REST style and more traditional message passing. They are equivalent under the covers.


Is it possible that there are two different dimensions of scalability in this discussion? Many hits per URI, vs a network of many hyperlinked URIs all offering the same few standardized methods?

But what's the difference? It's a lookup to map to a handler. Either it's the webserver or the application doing the lookup. Applications still need to register for parts of a namespace. It's all the same. BTW, this isn't to be argumentative. I'm honestly looking for clarification here.

The verbs are generic operations, and the genericity matters in this context. What are you looking to clarify?

Clarification on how it matters. I have not seen a single detail, just assertions. I think I have said why I don't see a difference. Can you explain with some technical content why it matters? Why is it OK to embed arbitrary object identifiers in URLs and then say it's really important to standardize operations? Is it really a burden to handle operations other than POST, GET, etc.? I don't see it. Especially when attributes passed in will be used to modify the request anyway. By using attributes you've made things less obvious. And attributes will be used, because adding handlers for URLs is more complex than adding attributes.

It may not matter, depending on what you care about. I'm not a REST expert, just someone with an interest and a nose. If parts of the infrastructure (like proxies, caches, gateways) are going to help with upward scalability (for instance, for a cache to provide a performance boost to a client without offering the client bad data), then the semantics of the data in messages has to be somewhat visible to those infrastructure parts. A cache may not understand the meaning of what you're GETting with a GET, but it can determine from the operation (GET) and little else whether the response is cacheable or not, and for whom and for how long. Meanwhile, it would be unreasonable to expect a cache to understand an arbitrary procedure call (RPC) well enough to implement caching policy against it.
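
To make that concrete, here is a minimal sketch (my own illustration; the class and method names are hypothetical, only the HTTP method and header names are real) of the kind of decision an intermediary can make while treating the payload as opaque:

 import java.util.Map;

 class CachePolicy {
     // Decide whether a shared cache may store a response, knowing only the
     // generic method and a couple of standard headers -- never the payload.
     static boolean cacheable(String method, Map<String, String> responseHeaders) {
         if (!"GET".equals(method)) return false; // only the generic read is safely cacheable
         String cc = responseHeaders.getOrDefault("Cache-Control", "");
         if (cc.contains("no-store") || cc.contains("private")) return false;
         return cc.contains("max-age") || responseHeaders.containsKey("Expires");
     }
 }

No such decision can be made for a POSTed RPC envelope, because the operation's safety is buried in application semantics the intermediary can't see.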

Years ago I worked on a broadcast distribution system for ticker data. We held to the philosophy that messages, once let fly, were opaque to the infrastructure, because that's "good layering". But that degree of opaqueness made it impossible for us to perform downstream filtering, which would have been very useful as further requirements turned up.

Why is embedding arbitrary object identifiers while using standard operations viewed as inconsistent? The architecture does not call for total visibility of application semantics: it calls for limited visibility for the purpose of enabling an infrastructure. There's a tradeoff there. The more you keep to the standard operations, the more help the infrastructure provides. You can extend the operation set beyond the generally understood scope, but you're apt to trade off performance by losing effective caching. That's for you, the application protocol designer, to decide.

We need MarkBaker here.

Filtering requires a common data format and a common filter language. It's not really dependent on REST or messaging. In my project we use a binary properties format for messages against which filters can be applied. As for caching, this is very tricky when considering arbitrary applications. You can realistically cache web pages, but you may not want your stock ticker or bank account cached. Then you get into cache aging and lots of other stuff that I'm not yet confident about yielding control over to unknown infrastructure. Proxies are part of HTTP and work for REST or whatever else uses HTTP. I honestly don't see what restricting oneself to a small verb set buys applications. I can't really think of truly generic services that are worth it. For web pages, yes; in general, no. Requiring some generic operations, I think, is fine. But adding more specific verbs or more specific objects is the same.

HTTP has all of the cache-aging stuff built in. Furthermore, it is not currently useful to compare REST to the distributed objects style of programming, because the distributed objects paradigm is not really under discussion. If we are talking about REST versus the standard web services stack, then you must keep in mind that the standard stack does not have any standard for the addressing of individual objects at all. Rather, what you address is a messaging endpoint. Objects behind that endpoint are addressed using proprietary addressing mechanisms, whether "stock ticker names" or "purchase order numbers" or "tModel UUIDs". Because it lacks a standardized addressing mechanism, it is impossible to pass references to objects from one service to another service. So REST standardizes the addressing mechanism to use URIs. Now any service can address information in any other service.

The next interoperability problem you will run into is that in following a reference from one service to the other you do not necessarily know what methods are available on the other service. Therefore REST standardizes these also. In particular, the standardization of GET allows one data object to incorporate data from another by reference without knowing anything about that other object's interface.

The next standardization issue you will run into is in data formats. This is where REST leaves off. Obviously it is not possible to standardize the whole universe into one vocabulary so REST only standardizes what can be standardized and leaves vocabularies to be independently standardized. There are a variety of powerful tools available for doing this including all of the XML Schema languages, XSLT and the semantic web tools. Whether you use these or not, you can benefit from the two levels of standardization that REST *does* provide, above and beyond what SOAP provides. -- Paul Prescod

HTTP includes a lot of data format infrastructure: Content-Types, Content-Transfer-Encodings, and the various Accept- request headers. Content Negotiation doesn't have to be only for dealing with appropriate presentation to humans.
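
As a sketch of what that looks like to a program rather than a browser (my illustration; the URI and media type are made up, the headers are standard HTTP, and Java 11's java.net.http is assumed):

 import java.net.URI;
 import java.net.http.HttpClient;
 import java.net.http.HttpRequest;
 import java.net.http.HttpResponse;

 public class Negotiate {
     public static void main(String[] args) throws Exception {
         HttpClient client = HttpClient.newHttpClient();
         HttpRequest req = HttpRequest.newBuilder()
                 .uri(URI.create("http://service/company/trades?id=123")) // illustrative URI
                 .header("Accept", "application/xml")   // ask for data, not presentation
                 .header("Accept-Language", "en")
                 .GET()
                 .build();
         HttpResponse<String> resp = client.send(req, HttpResponse.BodyHandlers.ofString());
         // The server states what it actually sent in the standard Content-Type header.
         System.out.println(resp.headers().firstValue("Content-Type").orElse("unknown"));
     }
 }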


Universal methods + universal IDs = the Web. I (or my program) can travel from one URI through embedded hyperlinks to anywhere on the Web via GET. I (or my program) can POST messages to any URI we meet. You can add another URI to the Web at any time and it will fit right in. No introductions necessary.

But remember, this is because the semantics of the application are explicitly assumed. It's a hypertext system. It's all built in. Your banking system, telephone switch, or a running car in a NASCAR race may not want to look the same. It could, I guess, just as Plan 9 makes everything into a file, but that doesn't mean it's appropriate. Do I want to ask my car if the carburetion's healthy, or do I want to navigate to the carburetor health page and do a GET? I want that choice as an application. REST doesn't give you a choice. It promotes every single object into the namespace just so a common GET operation can be applied. Not a win really, because non-hypertext systems have larger binding semantics than just traversal and invocation. There are state machines that must be adhered to that aren't in the hypertext model.

The Web is a hypertext system. REST was an attempt to conceptualize its architectural style - somewhat after the fact, and then becoming an influence on further development. I think the reason it has surfaced as an issue now is because Web "services" have popped up as a software business re-engineering tactic that do not adhere to the Web architectural style. I don't think anybody is saying that everything in the universe must adhere to that architectural style. But do understand what it is, why it is that way, and what it's good for (and not good for). Horses for courses, once again.

I'd agree if REST weren't being proposed as the philosophically correct alternative to SOAP/XML-RPC in general.

"Do I want to ask my car if the carburetion's healthy or do I want to navigate to the carburetor health page and do a get?" You want the latter because then you can provide the URI to the carburetor health page to a system that may not know about your car in particular but DOES know about "the car part health markup language." The more addressable leaf nodes you have, the finer the granularity of the references you can make. And the more general the methods on those nodes, the more general the software you can write that works with the nodes. For instance Google is an example of an application that knows how to do interesting computations on every web page in the world without knowing anything about the context or meaning of those web pages at all. Meerkat can work with any RSS document without knowing the vagaries of any particular content management system's API. Getting groups to standardize on node formats is much easier than getting them to standardize on full APIs.

"It promotes every single object into the namespace just so a common get operation can be applied."

What is the harm in promoting every single object? The benefit is that you may decide after the fact that you need links to them. Extensibility is a huge goal of the Web in general and REST in particular.

"Not a win really because non-hypertext systems have larger binding semantics then just traversal and invocation. There are state machines that must be adhered to that aren't in the hypertext model."

Have you ever booked a ticket on Expedia? It has a very complicated state machine expressed through hypertext. The hypertext expression has a variety of advantages; for instance (if it is well implemented) I can send somebody a URL from the middle of the state machine and they can continue with the computation without worrying about which state it was in. By examining the referenced document they can determine the state. This is preferable to having separate state machines on the client and the server and requiring them to stay in sync. I think it is better to let the server own the state machine totally and tell the client where it is in the process. I wonder if this wouldn't be easier to discuss on a mailing list like rest-discuss? -- Paul Prescod

Rest-discuss is much clunkier than Wiki, and the discussion never converges. We started this discussion of state machines and conversations on rest-discuss a couple of times and it always dead-ended and never picked up again where it left off. Fragments of conversation just flew off like dead leaves. In a Wiki it will get very messy and then we will (if we are good Wiki citizens) refactor and clean it up again. At the end, we could have a decent collaboratively-written exploration of the topic. (Or a bigger mess...) But moving from a Wiki to Yahoo Groups seems like going backwards. I think this is an important topic, and suggest we spin it off to the REST wiki. -- BobHaugen http://rest.blueoxen.net/cgi-bin/wiki.pl?StateMachineAsHypertext


Promoting contained objects into the namespace breaks encapsulation and the LawOfDemeter. There's a lot of harm. You can't change the underlying implementation without changing the URLs, which is a brutal restriction given the fine-grained nature of the URLs. Google can do interesting operations because it assumes specific data types (HTML, PDF, etc.) and the interesting operations are all related to the hypertext domain. Not quite the same as banking or network management or other arbitrary web services.

Just because GET/POST etc. are standard doesn't mean the semantics are standardized, which is what really matters. What are you getting? What can you post? Once you have to define all that, worrying about operation names is trivial. And I'll agree with your state machine statement as long as the POST is well defined, and by then who cares if it's called POST? It's all in the attributes, one of which could be an object ID and another of which could be the operation. The URL is a service access point behind which is further dispatch. What matters is the definition of the SAP and the messages the SAP accepts. Who cares about the dispatch?

The semantics (interpretation of data) need to be standardized under both REST and RPC to have easily implemented interoperability (WalterPerry? would disagree with me here). However, to have true interoperability for a particular service type under SOAP, you also have the problem of standardizing that service's API. Without API standardization under SOAP, every time you switch from one web service provider to another you will have the headache of an API mapping problem. This is not an issue under a REST HTTP/URI approach.

Take for example a 'Web Service' enabled trading system. To store a trade I have to wrap in a SOAP envelope the name of the 'store trade' method to invoke, along with the trade XML. Then I HTTP POST the envelope to the web service URI, and get some sort of confirmation in a SOAP response envelope. How do I send a reference to this trade to the counterparty to the trade, who understands the trade XML but has no knowledge of my trading system's API?

Under a REST approach, to store the trade I would just HTTP POST the trade XML to a 'trades' URI, and the HTTP response would contain a new URI identifying the stored trade. I might want to send a reference to that trade in an invoice, along with references to all other trades I have done with a counterparty. With the REST approach it is as easy as putting links to the trade URIs in the invoice XML, and granting the counterparty access to do HTTP GET on these URIs. The counterparty won't need to worry about the implementation interface of my trade system; it just needs to do the generic HTTP GET on the URIs.
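
A sketch of how that might look from the client side (my illustration; the 'trades' URI and the XML are made up, but the Location header is the standard HTTP way to hand back a newly minted URI):

 import java.net.URI;
 import java.net.http.HttpClient;
 import java.net.http.HttpRequest;
 import java.net.http.HttpResponse;

 public class StoreTrade {
     public static void main(String[] args) throws Exception {
         String tradeXml = "<trade>...</trade>"; // whatever format the parties share
         HttpClient client = HttpClient.newHttpClient();
         HttpRequest req = HttpRequest.newBuilder()
                 .uri(URI.create("http://service/company/trades")) // the 'trades' URI
                 .header("Content-Type", "application/xml")
                 .POST(HttpRequest.BodyPublishers.ofString(tradeXml))
                 .build();
         HttpResponse<String> resp = client.send(req, HttpResponse.BodyHandlers.ofString());
         // The new trade's URI comes back in the standard Location header;
         // that URI is what gets linked from the invoice XML.
         System.out.println("Stored trade at: " + resp.headers().firstValue("Location").orElse("(none)"));
     }
 }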

In both cases if I want to interact with another web-enabled trading system I might have to worry about a different trade XML format (meaning an XML transformation). In the SOAP case I might also have to do a mapping to a new API (meaning more client application code to write). -- RobertMcKinnon

Without a data definition you have a message with no information, so I can't imagine a policy of no definition being very useful for arbitrary clients. Internally in an application it can work fine, though. If you've standardized the data, it's no big deal at all to standardize the API. I do great gobs of work just using form encoding, but I include the operation as part of the data. Works fine and has no negative architectural attributes that I'm aware of.

What happens in an industry that needs to exchange data today but there has not even been a mention of standardizing data formats in the industry? For example commodity trading exchanges that offer clearing facilities are planning to offer these services via XML data exchange. There is no standardized data definition available for commodity trades so each exchange is creating their own. If they use the REST approach, then dealing with different exchanges is a simple matter of an XSLT transform of an internal XML format to the exchange's format. If the exchanges use SOAP then they will also be exposing their own APIs. Until there is a security standard for use with SOAP, each exchange might also go with a different approach for handling security data. A REST architecture with HTTP and URIs removes these problems today (and avoids having to pay vendors for unfinished possibly non-interoperable SOAP implementations that are based on a protocol that is still a working draft).

If all data is made available via URIs, then as data definitions become standardized, or are revised, a data resource identified by a URI can provide multiple representations based on user requirements. For example, if today a trading application provides a trade data resource as:

 http://service/company/trades?id=123&format=internal1.0 (internal XML format, version 1.0)

Then later if an exchange decides on its own format, the application can provide:
 http://service/company/trades?id=123&format=internal1.0 
 http://service/company/trades?id=123&format=internal2.0 (internal XML format, version 2.0)
 http://service/company/trades?id=123&format=nymex1.0    (an exchange's format, version 1.0)

Then in a few years if the whole industry decides on a format, the application can provide:
 http://service/company/trades?id=123&format=internal2.0 
 http://service/company/trades?id=123&format=nymex1.0    
 http://service/company/trades?id=123&format=ftml1.0     (the industry's format, version 1.0)
 http://service/company/trades?id=123&format=ftml2.0     (the industry's format, version 2.0)

At each point a user can HTTP GET the trade XML in whatever flavour is relevant to them, or XLink to or XInclude the trade in other documents. The way I see it, REST allows for evolution of data definitions, without the extra hassle of dealing with evolution of APIs. -- RobertMcKinnon
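
A rough server-side sketch of why this evolution is cheap (hypothetical names throughout; the point is that each new format is one more transform over the same resource, not a new API for clients to learn):

 import java.util.HashMap;
 import java.util.Map;
 import java.util.function.Function;

 class Trade { int id; }

 class TradeFormats {
     // Each supported representation is just a transform from the internal model.
     static final Map<String, Function<Trade, String>> RENDERERS = new HashMap<>();
     static {
         RENDERERS.put("internal2.0", TradeFormats::toInternalXml);
         RENDERERS.put("nymex1.0", TradeFormats::toNymexXml);
         // Adding "ftml1.0" later is one more entry here; no client-visible API changes.
     }

     static String render(Trade t, String format) {
         Function<Trade, String> r = RENDERERS.get(format);
         if (r == null) throw new IllegalArgumentException("unknown format: " + format);
         return r.apply(t);
     }

     static String toInternalXml(Trade t) { return "<trade id='" + t.id + "'/>"; }
     static String toNymexXml(Trade t) { return "<nymex:trade id='" + t.id + "'/>"; }
 }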

Personally I have found I like all information in the message so every layer of software has access to it. I'd like the version number, format, etc. in the message. There's no specific benefit to moving attributes into the namespace where they're not accessible. It's likely the same code handling all the versions, so the meta-information will have to be extracted from the namespace and put back in the message anyway. SAP-and-message is the most flexible arrangement because it allows implementations to change and naturally supports an internal dispatch that can be internally tuned and is not dictated by someone's first or second take at a URI structure.


REST is all about reducing coupling for maximum architectural 'goodness'. For the web that means attributes like secure, reliable, scalable, simple, accessible, etc. REST defines what the constraints on a system are if these attributes are to be sustained. There are always other approaches, but they will add coupling in exchange for some desired extra level of cohesion. Therefore they will add fragility. URIs are defined as never changing for the life of the resource. Therefore they are chosen with this stability in mind. XML-SOAP (or whatever) functions will necessarily tend to have this property too. So SOAP etc. will tend to converge on REST anyway, since REST is just a set of architectural attributes.

I can think of situations where it isn't appropriate, but it is worth trying. --RichardHenderson.

URIs are the most changeable thing in the world, especially given the small granularity of the things they must reference for REST to work. A true message is given to a SAP for dispatch, so any internal churn is not externally visible. IMHO this is better, and it's why SOAP and REST don't really converge.

That isn't REST then. REST asserts the constraint "[resource] identifiers should change as little as possible." It assumes a generic name resolution service to do this. Therefore your underlying resource can change as much as it likes, as long as its associative key remains stable. That key is most likely a primary key, and shouldn't change for other reasons. If it is changing then your data model is badly broken.

That's not a very realistic assumption, nor is positing an entire separate naming service, especially as, as I pointed out, you are forced to name all the contained things you would probably never have officially exposed before. The data model is not broken. Things change. Your implementation may want to change, which is why I brought up the LawOfDemeter as a long-standing design principle. You can't easily refactor with all your past decisions defined as unalterable.

That is certainly not the case for web application writers, who make judicious use of the 301 Moved Permanently status code and URL rewriting rules, which allow for totally changing the underlying implementation without touching the URI space. Also, considering that all those exposed objects are linked to from the equivalent of index.html, one can easily add or reassign the available list of URIs just by changing the index lists. -- AutrijusTang?
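
For instance, here is a minimal sketch using the JDK's built-in com.sun.net.httpserver (the paths are hypothetical): the old URI keeps answering forever, even though the implementation has moved.

 import com.sun.net.httpserver.HttpServer;
 import java.net.InetSocketAddress;

 public class MovedPermanently {
     public static void main(String[] args) throws Exception {
         HttpServer server = HttpServer.create(new InetSocketAddress(8000), 0);
         server.createContext("/old", exchange -> {
             // Same conceptual resource, new concrete location.
             String newPath = exchange.getRequestURI().getPath().replaceFirst("/old", "/new");
             exchange.getResponseHeaders().add("Location", newPath);
             exchange.sendResponseHeaders(301, -1); // 301 Moved Permanently, empty body
             exchange.close();
         });
         server.start();
     }
 }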


I'd agree if REST weren't being proposed as the philosophically correct alternative to SOAP/XML-RPC in general.

Some may be harping on that as a philosophical point, but certainly not all REST proponents are, and certainly not its chief proponents. The point is that SOAP/XML-RPC uses HTTP only for tunneling, and leaves behind the major characteristics that make the Web work (URIs and generic methods). So while SOAP applications will work (no one said they wouldn't -- that would be ludicrous), they won't allow parallel evolution of client and server as well as a design that adheres more closely to today's Web, which is what REST encapsulates.

SOAP/XML-RPC might converge on REST if corporate marketing were not a force, and if so much of that marketing were not aimed directly at application developers, who will prefer an API approach to the (harder to program) REST approach.

The discussion above about URIs changing is not concrete. URIs identify conceptual resources, and should be as durable as the concepts they identify. But nothing lasts forever. What's the big deal? Concrete example, if you could.

All resources are conceptual because they are identified by a symbol which can point to anything. The URL space is an ontology. Now go around the web and see how many dead links there are. You are trying to make something the central frame of reference of your system when in reality the world is constantly reorganizing. The clear solution is to use the URL as a SAP, not as an encoding of particular attributes in the ontology.

Not sure that that's the "clear" solution in all cases. Another solution is to bite the bullet and acknowledge that some things you held references to "died", others "morphed", and anyone with an interest was notified (when they asked), updated their private records and kept on going. I can live with that, if it buys scalability and I need scalability. Conceptual resources seem to have a longer life cycle than concrete resources, though, and I suspect that's the crux of this discussion. How often can a business completely change what it means when referring to its parts? And by the way, that "constant reorganizing" (above) includes the updating of links to eliminate dead and obsolete URIs. The web sloughs off like our skin, it would seem, if we're quick. And a nice feature of dead URIs is that they don't feed dust mites. 8->

Yah, 'clear' was wrongheaded. But how is scalability positively impacted? I don't really see it.

What if there were no caching of responses on the web? Would that constitute a scalability problem? If so, then consider that all responses to requests whose URIs are really SAP addresses are not eligible for caching (because the true resource is hidden). But also, in addition to scalability, what kind of "web" results when resources are thusly hidden? In reality, this problem exists on the web today. Often you're in some intensely detailed context and you follow a link that should provide a different view but similarly intense detail, and instead you get some corporate home page, from which you have to start a search for your topic almost from scratch. It's that kind of failure of semantic knitting that's of concern. REST can't guarantee good web design, while on the other hand, things like SOAP can guarantee bad web design.

I think I can see the problems with RPC for the web. Still I'm not sure SOAP is entirely a bad thing, and I wish I knew clearly where SoapAddsValue?.

Caching is the only example of scale, and caching is almost entirely based on the hypertext assumption. Messaging is used more for applications where caching is of dubious value and in fact can be harmful. GET is the only cacheable operation anyway, and I would not have a problem with a standardized GET that could be cached.

I think this discussion is going the way of the other REST discussions in that it's talking about something doable, but is it talking about the Web? REST is a Web architecture, and the web is hyperlinked, so yes there's a "hypertext assumption", although I sure wish you'd stop using vague phrases like that. If an application is to use the Web for anything more than a HTTP tunnel, then I assume the application is requesting services that will reveal new resources that can also be queried by that same application, and so on. That would seem to be the whole point of a "web" service.

Caching can be harmful when it gives out stale or otherwise invalid responses, but then so do we throw caching away with the bathwater? Hard to conceive of doing that across the board. So the argument has to be for intelligent (read informed) cache strategies, and we're back to the problem described above.

I don't understand the sentence above that says that 'Get' would not have a problem. Methods don't have problems; people do. Which person in this scenario does or doesn't have a problem? Attempting again to plumb through the vague language, how do you "standardize a get" such that a response can be cached (I assume that's what you meant) when the resource in the response is unidentified?

Caching is the only example of scaling? So if caching is broken, then the web doesn't suffer because it's only one example? Again having trouble following the logic in this, sorry. How many examples does it take to break the web bad enough to make a difference? Why would we be interested in an architecture like that?

Hypertext is hardly vague. It is quite specific. Intelligent caching would be a service very specific to classes of application semantics. Could these be built into the web separate from the actual applications in such a way that would be universally accessible? I'm dubious, having worked on several caching systems. Generalized caching makes much less sense for resources involved in a process like buying something, banking, network management, command and control, or multi-player games. Google wouldn't make any sense of these, because what is central is the process and the behaviours in the process, which are not very cacheable.

Speaking of vague, the benefits of REST are vague because nobody seems able to state them in any obvious way. The best anyone offers is caching, and then others are attacked for vagueness, when REST is only vague about its benefits.

Suddenly we see the folly of anything that smells of "vaguer than thou" contests. "Hypertext assumption" was the phrase I couldn't manage. Was the assumption that there would be hypertext, that there would be nothing but hypertext, something else?

The benefits of REST are going to sound vague in a discussion where the problem is vaguely stated (or not stated at all), but that's not a characteristic of REST. The antidote is for someone to forward a web service problem scenario with lots of specific details, indicate the exact nature of the problem, and let REST experts make concrete proposals about that problem, referencing the architecture. Ready, set, go.

That's OK. I think someone should be able to state the benefits of REST in a generic sense.

That's been done, and it's been criticized as being "vague". For an argument pro REST to sound something other than vague, it should address a concrete problem. For that we need you to donate one. Otherwise, might as well retract the statement about wasted words above, in all fairness. "Cheap to offer; dear to counter." That's what we have here. Ante up, or sit it out. You can't have it both ways. More importantly, there's nothing to learn continuing on this way.

Honestly I haven't seen the benefits other than caching which isn't generalizable. There's something about making better use of the web through a specific philosophy of URL naming, which has its good and bad points. What else?

I think the point's been made, elsewhere if not here, that REST is not the solution to everything. A handful of web pundits claim that no form of RPC (CORBA IIOP, RMI, whatever) has ever succeeded to scale to web proportions. I wouldn't know. To me, REST is interesting in terms of untapped resource within the HTTP protocol. How much can be built using just HTTP? What needs to be added? What does SOAP add that's already there? What does SOAP add that's not already there? This last question is the one I care about most right now. Does anyone have an answer?

SOAP just adds standardized support for complex messages.

Can anyone show how SOAP "supports" complex messages in a useful way? Can anyone show how SOAP introduces helpful function that's not available in HTTP? I'm sure there's a body of information in answer to this last question. I'd like to get away from pro-SOAP anti-SOAP politics and examine things at a slightly finer grain to see where value is coming from. For instance, if the REST guys get their way and SOAP is dropped (I guess some want that, at least some of the time) then how shall we expect to be extending HTTP in its stead, if at all?

So it's a complex message format. Nothing special really. Better than you can do with form encoding, because that is a flat space. It's useful because it is standardized and supports arbitrary messages. SOAP works over HTTP and any other transport. It's not the second coming, but sometimes you do need complex data, and there needs to be a standardized way of formatting and processing it. REST is less useful for not addressing this side of things, not more flexible. You can take or leave the IDL part of the web services junk, but a standard message format is very useful.

Isn't XML the workhorse above, and SOAP just an empty saddle along for the ride, stirrups whipping all over the place and causing a ruckus? Seriously, what does SOAP do for complex message formats that XML is missing? Saying that SOAP works over any transport isn't saying anything unless you say what work SOAP does, which is what I'm hoping someone will say here. Given that XML is the hero and not SOAP, and given that REST and XML are logical together and yet don't constitute SOAP, the question remains: What does SOAP do? I stop just short of frothing. :-)

Can anyone explain, without using the words "Supports", "Allows" or "Flexible", just exactly where SOAP makes a positive contribution? Is this really that hard?

Saying XML is saying something, but it isn't saying enough. A standard for using XML for messaging is required. SOAP is one such standard. The standard is what is valuable. A hero without anyone to save can't be a hero.

SOAP doesn't constrain XML message complexity at all. Any valid XML message can be placed in a SOAP envelope. The space that's not flat (cf. form encoding) is XML, not SOAP. What does it mean to standardize an arbitrary message? If arbitrary messages are allowed, then there's no standardization. What does it mean to support arbitrary messages? (Or support anything, for that matter?) The null set {} supports arbitrary messages, in the sense that it doesn't constrain them. Same for my big toe. Please show me a message that can be sent by SOAP and cannot be sent without SOAP. Please exclude from your example the empty SOAP envelope. Thanks.

From the SOAP Version 1.2 Part 0: Primer: Three standardized actor roles have been defined (see part 1, section 4.2.2), which are "none", "next" and "anonymous". Those three seem representative of the value added by SOAP. :-)

Is there a point? Messages are messages. If nesting is supported then the same data can be sent in ASN.1, SOAP, CDR, XDR, RMI, BLOB, etc. No big whoop. Any could be used as long as you know what to expect. I and many others would like to use XML. Some sort of conventions are needed to know what to expect in the packet. Those conventions are embodied in SOAP or XML-RPC or roll-your-own. The point of the standard is to know what to expect. There's no frickin' magic here, so I don't quite understand the opposition or what your issue is. Can you send the same data another way other than SOAP? Yes. No duh. Will the other side be able to handle it? If they understand it they can. If I have a standard format then I can create standard routing layers, which is nice. Then since it's XML I can create standard composition and decomposition layers, which is nice. That's about it. It doesn't solve world hunger. Sorry.

There is a point indeed. Every argument in favor of SOAP I've heard so far is either really an argument for XML or HTTP, or it's the argument that SOAP doesn't prevent you from... Fine, but not constraining is not the same as adding value. If I'm going to jam the wires with envelopes and namespace declarations up the ying-yang, then I want to know what I'm getting for my investment. World hunger? I'll settle for a mere 10% improvement over messaging without SOAP. But no one has been able to indicate what that is. What can I do with SOAP that I can't do without it? What can I do better with SOAP that I can't do so well without it? Except for jumping a moving bandwagon (Fielding's bandwagon of "mass hysteria"?), there's no plausible reason to use SOAP.

Problem is, I don't quite believe that last statement, and I was hoping someone could clue me in as to an actual use case where SOAP adds value. It's not going well. When I asked MarkBaker, he said "you probably wouldn't understand unless you're very experienced in distributed systems." So I expect there's subtlety to it. I guess I'll take the question to the REST wiki. Thanks.

As SOAP is based on XML, it's hardly surprising it is mentioned. As for standards, they all suck. You could do better than IP, UDP, TCP, but you don't, because the value is in the standard and the ubiquity of implementations that become available. That's valuable. I think SOAP is too complicated, but so what? As with most libraries, you'll never see the real packets, so the namespaces etc. are of little concern. Perhaps the use case is too obvious. It allows you to define and send standardized messages in a community of like-minded programs. If this doesn't have value for you then don't use it. You are not required to agree, like it, or use it. You are not required to see value. But others do, and please respect that. I have no problem with REST except when someone says REST is best and everything else is stupid. Maybe it is best not to converge, and everyone should go their own way and see what happens.

I would prefer a shared understanding based on actual, physical example, rather than a dichotomy based on antagonism. The assertion "it standardizes so that like-minded programs can understand each other" ought to generate some instances we could examine, to see if the interop promise is met, and also to see if the 'it' of the assertion is in fact SOAP and not XML or HTTP or something else. Are you suggesting this is unreasonable to ask?

It's a waste of time, as messages are messages. There's nothing new in SOAP, so there's nothing really to talk about. If you have some specific issues then please go first.

SOAP was created by programmers, and it's exactly how I'd expect persons used to calling API functions to carve out a space in XML for programs to call each other. The interesting thing is that programmers generally don't touch SOAP in practice; they touch *tools* which generate or consume SOAP, so the original motive (XML that looks like a function call, smells like a function call...) has become moot. It would be no harder for "web service" programming tools to generate GET-PUT-POST calls, rather than just POSTs-that-look-like-function-calls.


Security for WebServices based on RestArchitecturalStyle

Anyone got information to share regarding what RestArchitecturalStyle proponents have to offer for enhancing the security (authentication, privacy, etc.) of WebServices? How will WebServicesSecurity as proposed by OASIS come into it, if at all? -- dl

For information pertaining to WSS 1.0 approval see http://www.computerworld.com/printthis/2004/0,4814,95198,00.html


Without getting buried in philosophy, there is a purely pragmatic rationale favoring REST (compared to SoapProtocol)...

Rest of discussion moved to RestInSoap.


To me, REST and SOAP differ in one fundamental manner:

 interface REST
 {
   Map get(URL what,Map params);
   Map put(URL what,Map params);
   Map post(URL where,Map params);
   //...
 }

 interface SOAP_based
 {
   MyType myOperation1(MyParameters p);
   MyOtherType myOperation2(MyOtherParameters p);
   //...
 }

REST is about one final interface that everybody agrees on, with operations that are understood and that can be interpreted by the infrastructure, but where nobody knows a priori what the content of the parameters and return values should be. These parameters should be defined out of band, but the operations themselves need no further explanation. The way the parameters are specified will vary from one solution to another.

SOAP, on the other hand, is a mechanism for defining new, business-specific interfaces that nobody understands a priori but that are very fine-grained. Neither the operations nor the parameters are well-defined in advance, but the way these are specified is the same for all solutions and is machine-readable. Some sub-interfaces have been standardized and can be implemented within the infrastructure itself (security, reliable messaging, session management...).

To me SOAP is a kind of 'RPC/RMI/CORBA/whatever that now works' and is close to a programmatic way of thinking. REST looks more like some kind of super-NFS that scales worldwide. You can implement RPC through NFS, but that's awkward. You can implement an uber-generic ObjectOriented MetadataDriven? FileSystem where two implementations will require 200 different lines of code to get the size of a file, but that's cumbersome.

By the way, regarding the financial clearing system, the RPC-like SOAP is a much better fit: in that business you don't want to access a resource, you rather want to exchange messages, and messages are transient resources you must consume.

-- PhilippeDetournay

Thanks for writing the above: it is clearer than other explanations I have seen about the differences between the two, and doesn't stumble over attempting to clarify the various protocol issues (expectations, etc.) underlying SOAP.


I am restless, trying to wrap my head around REST. I had some struggling with SOAP years ago, but I always thought all that XML was insane overhead. However, REST seems so simple, I fear I may not have understood it properly.

I recalled reading something about the design of the AdaProgrammingLanguage? years ago, and a quick fact-check confirmed my memory. At one point, it was considered to not allow Ada functions to alter globals (i.e. state), but only to read them, meaning that function calls could be optimized under some circumstances. (See http://archive.adaic.com/standards/83rat/html/ratl-08-04.html for the explanation.)

As far as I can tell, REST is simply a case of having resources, which have state, and allowing some remote operations on these resources. For efficiency, instead of having only a single type of call, the operations are split into two types: function calls, which are idempotent and must be free from side-effects (and, given that very few kinds of arguments are possible, can therefore only return information about the state of the resource, hence only one operation: GET); and procedure calls, which cause some process to take place, that may have side-effects on the state of resources (including modifying, creating or destroying other resources), and optionally return a result.

What's the big deal here, what have I missed? Is REST just the separation of side-effect free functions (which can be memoized - cached - and optimized in other ways) and procedures, applied to webservices?

-- LasseHp

That's a reasonably accurate summary. However, I would note that there is a difference between 'idempotent' and 'safe' (aka 'pure'). 'GET' must be 'safe' - no side-effects at all. Idempotence merely means that repeating the side-effects in isolation causes no problems - i.e. like setting the state to '3', twice. 'PUT' is idempotent, 'GET' is safe, and 'POST' is more the traditional arbitrary-side-effects. Caching of 'GET' is supported due to purity.
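
A toy resource makes the distinction concrete (my sketch; nothing HTTP-specific about it):

 class Counter {
     private int state = 0;

     int get()           { return state; }  // safe: no side-effects at all, hence cacheable
     void put(int value) { state = value; } // idempotent: put(3) twice leaves the same state as once
     void post(int incr) { state += incr; } // neither: repeating it changes the outcome
 }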

That is interesting. You made me think of how the "representation" part of REST is to be understood in the context of HTTP. Suppose the resource is a simple text, which may however be retrieved in different languages, say Danish, German and English. Is it "right" to consider this the same resource? After all, HTTP provides the Accept-Language header for this purpose, and it could be construed as a representation choice. I haven't worked much with HTTP caching, but I suppose the cache would know that each language is a different "representation" of the URL requested, and would store all language versions, returning the correct one upon examination of Accept-Language.

Now, as to why I thought of this and why it is related to your comment: what if the server implements a translation service? You PUT a text in one of the supported languages at some URL, and subsequently you can retrieve the text in other languages, automatically translated. Now, I don't see why the server shouldn't use lazy evaluation; perhaps statistics have shown that translation to German is very rare, so the text is only translated to German upon the first GET of a German version. Does this count as pure? Probably the German version would also be stored, to avoid retranslation. Does this count as a state-changing side-effect?

-- LasseHp

SideEffects associated with lazy or lenient computations are associated with the operation that defines the lazy value, rather than associated with the operation that retrieves it. The abstraction is that the 'PUT' operation defines all the language variations, then the 'GET' operation retrieves one - possibly completing the computation on behalf of the 'PUT' operation. In secure systems it is often important to maintain the proper context for each operation, and to use the 'PUT' context when completing the PUT operations; however, security issues are often lost on such simple examples as a translation program, and thus so are the relevant distinctions.


CategoryCommunicationProtocol

