Is .NET that good, or is Java just that bad?

I’ve been having spirited discussions lately about a public-facing API. The producers of the API have a RESTful service that serves up JSON. Great! Super! OK, now what about non-web consumers? Or even just non-JavaScript consumers? They claim that we are being whiny and should basically “deal with it”. Since dynamic typing is the trend and “all the kids are doing it”, we (the customer) should adapt!

Now, coming from the .NET world – I thought instantly that I could whip up a service that could host REST in JSON or XML, support multiple encodings, and host SOAP and remoting (net.tcp) endpoints too – all effortlessly. Several endpoints, all hosting one physical service.

ASP.NET, WebApi, and WCF give these things to me ***easily***. If I wrote a service in this technology, you could consume it from anything – and easily. What technology can’t read RESTful JSON, RESTful schema-less XML, or SOAP?

In the year 2012, why WOULDN’T you want to offer your API in every format, every style, every encoding, and every language? Answer: if it were hard to do and the tooling or technology didn’t support it. That’s the case here. The service is written in Java, and each additional thing we talked about means the developers actually have to write code. No wonder they are pushing back so violently!

It kind of dawned on me that I take .NET for granted sometimes. When I look at things like this compared to Java, Ruby, PHP, etc. – it really is almost staggering how different they are. I continue to pursue .NET and Microsoft technologies because they are, by far, the best (read: most developer-friendly) technologies that exist. The day that changes is the day I’ll switch vendors!

So, let’s call my bluff:

“It’s easy!”, I claim. “No problem!”, I suggest.

“Oh yeah?”, I respond, “Let’s just see, mister.”

I started by creating a new MVC/WebApi project and added a new controller:

public class ServerTimeController : ApiController
{
    // GET api/servertime/get
    public StuffViewModel Get()
    {
        return new StuffViewModel();
    }
}


This StuffViewModel just contains properties that point to two different data types:

public class Customer
{
    public Guid CustomerId { get; set; }
    public String FirstName { get; set; }
    public String LastName { get; set; }
}



public class ServerTimeDetail
{
    public DateTime CurrentServerTime
    {
        get { return DateTime.Now; }
        set { }
    }
}


I know this is somewhat nonsensical, but I wanted to have a few layers of data structures. So, when I call my new RESTful WebApi from a browser, I get XML; when I call it programmatically, I get JSON (by default).

In fact, by simply setting the Accept header in my request, I can request XML or JSON and WebApi does the content negotiation for me automatically:

Uri url = new Uri("http://localhost:12698/api/ServerTime/Get");
HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
//request.Accept = "text/xml";
request.Accept = "application/json";
String result = String.Empty;

WebResponse response = request.GetResponse();
using (Stream stream = response.GetResponseStream())
{
    using (StreamReader reader = new StreamReader(stream))
    {
        result = reader.ReadToEnd();
    }
}


Next, I was looking at encoding (as in ASCII, Unicode, etc.). According to this article, the placeholders are there, but it’s not fully supported in this version. When it is, it will work just as you’d expect: you set “Accept-Charset” to “UTF-32” or “Unicode”, for example, and it returns the serialized response encoded that particular way.

For things like language, that is something you deal with in your code, not so much in the infrastructure. So, I added a couple of properties to my viewmodel class and added this functionality to the controller (just so it’s clear in the response that I “reacted” to the request for specific encodings and languages):

public StuffViewModel Get()
{
    StuffViewModel instance = new StuffViewModel();

    IEnumerable<string> encodingValues;
    IEnumerable<string> languageValues;
    if (Request.Headers.TryGetValues("Accept-Charset", out encodingValues))
    {
        instance.PreferredEncoding = encodingValues.FirstOrDefault();
    }
    if (Request.Headers.TryGetValues("Accept-Language", out languageValues))
    {
        instance.PreferredLanguage = languageValues.FirstOrDefault();
    }

    return instance;
}


and sure enough, from the client, I do this:

HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
request.Accept = "text/xml";
//request.Accept = "application/json";
request.Headers.Add("Accept-Charset", "UTF-32");
request.Headers.Add("Accept-Language", "es-MX");


and my response looks like this:

<StuffViewModel xmlns:i="" 


My service is easily able to react to the request.

So – with just a little bit of code, we can pretty easily support multiple output types (XML, JSON, or some other custom type). In fact, another emerging practice is to support content types like “text/csharp-sample”, where that API call returns a C# sample for that call. That would be relatively easy to do (see this page on creating custom Media Formatters). In other words, imagine that your service supports the following content-types:

  • application/json – return the result in JSON format.
  • text/xml – return the result in XML format.
  • text/html-help-docs – returns a formatted HTML document for help on how to use this specific API.
  • text/csharp-sample – returns a C# snippet of how to use this specific API
  • text/vbnet-sample – returns a VB.NET snippet of how to use this specific API

You might think this is overkill, but wait. Imagine you have a front-end for your API. Your front-end can actually call the ACTUAL API to go get the documentation and samples. So, not only are these a convenience for your consuming developers – you’ve just solved the content-management problem of how to present documentation.

Again, take a look at the custom Media Formatters link, mentioned above – this is a pretty great concept that is available today.

Now, onto WCF. I create a new service, define it like this:

public class ServerTimeService : IServerTimeService
{
    public StuffViewModel GetStuff()
    {
        StuffViewModel instance = new StuffViewModel();

        return instance;
    }
}


And voila, I have metadata/discoverability in the browser – and, of course, the WSDL.


To handle encodings and languages, I need to make that part of the interface:

public StuffViewModel GetStuff()
{
    return GetStuff("UTF-8", "en-US");
}

public StuffViewModel GetStuff(String preferredEncoding, String preferredLanguage)
{
    StuffViewModel instance = new StuffViewModel();

    instance.PreferredEncoding = preferredEncoding;
    instance.PreferredLanguage = preferredLanguage;

    return instance;
}


Why? Because WCF isn’t just about SOAP and the web – I can now host this “service” as a remoting-like net.tcp endpoint, or one of the other non-web endpoint types! So I’m not guaranteed any sort of web context from my consumer – it needs to be part of the interface.

Bottom Line:
THIS is the world of services on the .NET platform for the past few years. This represents enterprise-class features coupled with the latest technologies and standards. THIS is what a .NET developer expects to find out in the wild because it’s effortless to do this in .NET.

I expect to be able to consume a service as SOAP, or as a RESTful service in JSON (when calling from AJAX) or XML, because it’s easier to work with from .NET code. I expect services to support encoding and language preferences too.

On the .NET platform you CAN very easily host this ONE service over all of those different endpoints. I spent probably 10 minutes on this sample and it does most of what I set out to do.

In my professional opinion, if you are working with a technology that can’t do these things, in the year 2012, you’re putting your effort into the wrong technology.

Posted in ASP.NET MVC, Mobile, Uncategorized, Web Services
5 comments on “Is .NET that good, or is Java just that bad?”
  1. kentrancourt says:

    Rob, I like you a ton, and I really mean that. And I think you’re smart and an excellent coder, and I’ve always enjoyed opportunities to collaborate with you, but I need to call bullshit on this. Sorry.

    As a matter of opinion, I believe WS-* web services are such a pain in the ass to debug that an overwhelming majority of developers want absolutely nothing to do with them. Resource-oriented architectures, such as RESTful APIs, make more sense than WS-* ever did. The architecture of the web is inherently resource-oriented. RESTful APIs embrace and make use of that instead of trying to unnecessarily layer RMI on top of it. It’s a good thing. I agree with you, in principle, that more options == better, and to that end, I’m totally on board with supporting both JSON and XML. WS-* and/or SOAP is where I draw the line, though. I just see it as a step in the wrong direction. This is a matter of opinion, of course, and I respect your opinion on this matter, for sure. (So this isn’t the part I’m calling bullshit on.)

    BUT, as a matter of FACT, Java has no difficulty doing any of the things you discussed. Everything you discussed can be accomplished with the same ease and grace as you do it in .NET. (And we’ve been doing this for quite some time.) I know you’re aware that Spring is the de facto standard framework for Java these days. Anyone who’s doing Java or Groovy and doing it well is using Spring. Our controllers look virtually identical to yours. If we were implementing the same example as you, our domain objects would be virtually identical as well. So we have controllers doing something and returning domain objects (or if you feel so inclined, you might have a separate class of object that’s an OO representation of a message rather than returning domain objects directly; this would insulate each of the API and the underlying business logic from changes to the other, but this is optional and I digress). Content negotiation (e.g. what representation / message format has been requested), marshalling (turning objects to XML or JSON), and unmarshalling (turning XML or JSON to objects) is merely a matter of a TRIVIAL amount of configuration. To be exceptionally clear on this, if I had already configured my API to support one of JSON or XML, it’s absolutely trivial to make it additionally support the other. It’s literally just a couple lines of config to say, “yes, I will accept this content type,” and “yes, I can return this content type.” Done.

    So again, I do and always will respect your opinions, but you are underestimating Java’s capabilities by an awful lot and stating that underestimation as fact. No fair.

    But what I’m willing to concede is that any Java developer who TOLD you this was difficult was perhaps not a very good Java developer or maybe if they WERE worth their salt, perhaps they were dealing with legacy code written originally by a poor developer and too costly to clean up (you know how the business never had money to invest in technical maintenance). One could make the argument that low quality developers are endemic to the Java community even, but I don’t quite buy that. Ostensibly, it’s true, but under the surface, I think low quality developers (regardless of their technology of choice) are actually endemic to large corporations and many large corporations use lots of Java. Just because there’s a lot of awful “duh velopers” (your term) running around out there using Java does not mean Java itself sucks!


  2. Rob Seder says:


    First, I don’t want to expose anything too proprietary so I’ll be intentionally vague about a few things.

    I really appreciate your insight, thanks for the comment. This all stemmed from working with Tony for maybe a week or more – going back and forth on this topic. He explained that anything beyond JSON was going to be exponentially more difficult and time-consuming. I’m summarizing what I thought I heard him say. He spent a lot of time defending JSON-only – but when pushed, he explained that it was going to just be prohibitively difficult to offer more options. So, this was my way of confirming that I wasn’t crazy – that it really was THIS easy in .NET. And that is frustrating, because we are forced to work with a technology that (from what I gathered at the time) wasn’t able to reasonably offer these types of data formats.

    I have some thoughts on your points above. JSON – we both agree, good-to-go; it can be easily consumed from JavaScript for AJAX calls – and some other languages support it nicely (.NET does not). JSON is needed for everything nowadays, no matter what. Schema-less XML would even be OK for us, because we can generate data structures from that easily and then we could start working in a strongly-typed world. Why does that matter? This is where I and, apparently, the rest of the computing world differ in philosophy.

    Weakly-typed languages, dynamic languages, and NoSql are all the rage lately – all the kids are doing it! I’m discovering, and I could be wrong, that NoSql, for example, is really just the “junk drawer” of the data center, and all of this weak and dynamic typing is going in exactly the wrong direction. Let me explain: I want to leverage the compiler and tell it everything I can, so that it can enforce as MUCH as possible for me. I WANT to find errors at compile-time – not at run-time. Same thing with NoSql (and JSON, for that matter) – I don’t have structure, so the “database” can’t “enforce” anything for me (neither data integrity nor referential integrity). This means I either have to write WAY more code to do type-checking and just general checking to make sure that the junk I pulled from the junk drawer is still in the vague structure I think it should be, or just “assume” that everything will work and deal with run-time errors later. I consider myself a software “professional”, and that is absolutely not a reasonable approach to software construction – in my professional opinion. My point is, I am not-at-all on-board with the new “structureless” trend, as it flies in the face of everything I’ve come to know that helps me write solid code. The structure of an RDBMS and the language compiler are my friends. They help me enforce rules. Without them, I’m either going to have more run-time errors or more code – and I also lose things like Intellisense. It’s a lose-lose-lose for everyone involved – well, except for the lazy programmers. 🙂 So – that’s where I’m coming from philosophically – you (along with 98% of the rest of the IT world) likely disagree with most of this, and that’s fine.

    With that said, why would I want SOAP web services? This is one of those cases where I think MS did it right and Java didn’t (in this particular case). SOAP for .NET is effortless and it just works – they’ve made it so easy. It wasn’t until I started doing interop with Java web services that I started to see the pain you are describing. So I think your dislike of SOAP likely comes from the tooling and how your technology approached SOAP. In the case of .NET, they really hit the nail on the head and made it so turnkey and easy that it really is the preferred way to interop from within compiled code. That is why (I think) my preference for SOAP may sound foreign and wrong to you, because it’s painful in Java – and it’s also heading in the opposite direction of the “junk drawer” movement that is currently going on. However, hopefully that helps explain why I’m swimming upstream – I disagree with the weak/dynamic movement, PLUS MS makes SOAP super-easy.

    Anyhow, my bottom line with all of this – with Tony – is that it doesn’t really matter what I prefer or what he prefers. If you are writing a public API that you SPECIFICALLY want to encourage people to use, why WOULDN’T you offer it in every way possible? If this were .NET, I’d offer it as schema-less XML or JSON over REST, I’d offer it as a SOAP end-point and as a native .NET “remoting”-like endpoint (because it’s only an additional 5 lines of config to do that) – and I’d probably find some way to offer it as a Java RMI endpoint too. Why NOT do that? In fact, with WCF, it would also be easy to offer many of those calls as RSS/ATOM feeds too – where you pass in the filter options via a RESTful URI and get an RSS of the results. Just set the content-type to application/json, text/xml, or application/rss+xml – and “content negotiation” in ASP.NET’s “webapi” will handle the marshaling. This stuff is all pretty simple and basic.

    That is basically what Jamie and I talked about with Tony – which drew me to the conclusion that we aren’t doing ANY of this “because it’s hard”! As of now, the plan is still to offer JSON-only for everything. Almost anything would be better, just SOME other option: SOAP, POX, RSS, ATOM, etc. – but JSON is particularly a pain to work with in .NET. There is an open source project that makes it a little better, but it’s still a pain.

    Sorry for the long reply, but I did want to explain my position. And again, I appreciate your insight.



  3. kentrancourt says:

    I think you’ll be surprised that in large part I agree with you on the virtues of strong typing. I do a lot with Groovy lately (outside of our day job) – which is, of course, a dynamic language that runs within the JVM. So really it’s not Java (though it’s similar in ways), but it runs on the Java PLATFORM. Groovy was heavily influenced by Ruby, but stands in contrast to Ruby in a few unique ways. One of those is that strong typing is still an option. And most Groovy developers I know buy into a little double entendre we like to use: “If you can type it, type it.” Why? Because although we like the flexibility to do weak/duck typing, we also, in the average case, still value compile-time type checking and context help (e.g. Intellisense). So I don’t disagree with you on the virtues of strong typing. Nay. I STRONGLY (pun intended) agree.

    That said, I don’t see how the argument FOR strong typing begets an argument AGAINST NoSQL datastores or schemaless message formats. And I’m not saying these concepts are without their problems (because they have plenty), but compatibility with strongly typed objects isn’t one of them. There are numerous frameworks for marshalling schemaless data in the guise of JSON, XML, or NoSQL datastores to STRONGLY typed objects and unmarshalling strongly typed objects back into those other formats. Let’s use a Foobar class as an example. The message format may be schemaless, and so might be the datastore, but the attributes of the Foobar object provide me with an implicit schema. When I read an object of type Foobar from the datastore, whatever fields are defined by that class are populated from fields by the same name in the node/document in the datastore. This is a simple case. Finer control (through configuration or annotations) is possible when it’s necessary. Now, supposing I want to turn that object I just read from the datastore into schemaless JSON or XML – again, cake. Frameworks are available to iterate over the fields of the object and construct a message that looks like the object. All of this works the same in reverse (message to object to datastore), so for brevity I won’t state all of that explicitly.

    Since I did acknowledge that NoSQL datastores and schemaless message formats aren’t without their problems, I’ll quickly highlight what I believe some of those problems to be, and then tell you why I think they’re minor. To be fair, you did touch on these yourself. With an RDBMS, wherein you most certainly do have a schema, you can enforce the integrity of the data. “This column must not be null.” “This column must be an integer.” “This column must be a five character string.” You don’t get that with NoSQL. So with NoSQL, yes, you run the possibility that something you expected to be there isn’t there. Or it’s the wrong type – e.g. a field is a String when you expected a Date. There are, for sure, cases where you need to code around this. But I honestly believe those to be fringe cases. In the AVERAGE case, you need to do very little to account for this. Why? Because your strongly typed objects already IMPLY a schema – without you having to write any code you wouldn’t write ordinarily, i.e. just field declarations on classes. If you need to take further steps to ensure the integrity of that data, then you should be performing some sort of validations on those objects. But wait. Don’t we do that anyway, even with an RDBMS? If you’re expecting an integer in some form field (or message field in the case of an API) and the user (or client) gives you a String, don’t you give the user (or client) back an error message that says, “Hey. Moron. That’s NOT what I asked you for. Give me a number!” Haven’t we ALWAYS done this anyway? And if one is NOT doing this, they should be, because validation is also our best defense against numerous attack vectors – e.g. XSS and SQL injection attacks. What I’m driving toward here is a question… where is all this extra code that you supposedly need to write to accommodate schemaless datastores and/or message formats?
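    To sketch what I mean by an implicit schema, here’s a deliberately naive, toy illustration using nothing but the JDK (the Foobar class and the field names are hypothetical; real frameworks do this far more robustly, with the configuration and annotations I mentioned for finer control):

```java
import java.lang.reflect.Field;
import java.util.HashMap;
import java.util.Map;

public class ImplicitSchemaDemo {

    // A hypothetical domain class - its public fields ARE the implicit schema.
    static class Foobar {
        public String name;
        public Integer count;
    }

    // Naive "unmarshaller": copy matching fields from a schemaless map
    // (think: parsed JSON, or a document from a NoSQL datastore) onto a typed object.
    static <T> T fromMap(Map<String, Object> doc, Class<T> type) {
        try {
            T instance = type.newInstance();
            for (Field field : type.getFields()) {
                Object value = doc.get(field.getName());
                if (value != null && field.getType().isInstance(value)) {
                    // right name, right type: populate the field
                    field.set(instance, value);
                }
                // wrong type or missing: the field stays null - or you could throw,
                // which is exactly the validation you'd do with an RDBMS anyway
            }
            return instance;
        } catch (InstantiationException | IllegalAccessException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        Map<String, Object> doc = new HashMap<String, Object>();
        doc.put("name", "widget");
        doc.put("count", 42);
        doc.put("extraneous", "ignored"); // schemaless noise, harmlessly dropped

        Foobar f = fromMap(doc, Foobar.class);
        System.out.println(f.name + " " + f.count); // widget 42
    }
}
```

    Again: a toy. But it shows where the “schema” lives when the message format doesn’t have one.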

    Ok. So, so far, I agree with you on strong typing and I disagree with you on NoSQL. This is already long enough, so I’ll try to make this last part short.

    Message formats. Tony and Mark and I do have some philosophical differences on this. I believe in designing interfaces first. In the case of a web API (e.g. RESTful service), that means you start by describing what the resources are and what you want those messages to look like. I like to start with XML. Note that this isn’t any different from contract-first / WSDL-first web service design in the WS-* world. Even if I hate WS-*, that’s still a good practice, IMO. So once I have example messages that I can live with, I back into an XML schema. Here’s the really, really cool part. Java has frameworks that can take your XML schema into account when marshalling to JSON or unmarshalling FROM JSON. So, I like to start with XML support and I then end up getting my JSON support for free. Mark and Tony both think this approach is overkill and favor a more free-wheeling approach. I don’t personally like that because I don’t like any volatility in my interface / message formats. I want the client and the service to ALWAYS have something they can both reference as the source of truth for what a GOOD message looks like. That being said, I respect their perspectives on this because it enables them to move a little bit faster. But that’s always how Mark and Tony and I have worked together. They’re trailblazers. I’m the methodical, meticulous one. When you’re mostly doing R&D for a living, as we were, that works out really well. Get a couple guys moving REALLY fast and have someone following behind to clean things up a bit.

    One final note- my allergic reaction to WS-* isn’t just a Java thing. WS-* is also a pain in the ass with Ruby. I had a group of Ruby developers literally GROAN on a call recently when they discovered they had to invoke some WS-* web services. I’d say the backlash against WS-* isn’t endemic to the Java and Ruby communities though. I haven’t seen many developers using any language other than .NET express any love for WS-* at all. It’s possible and even likely that .NET simply on account of some really kickass tooling does WS-* very well and there’s something to be said for that. But literally everyone else hates it.


    • Rob Seder says:


      First – you are one of those kind of people that should have a blog. You should REALLY write this stuff down – and I can assure you, I’d have comments on pretty much all of your posts! 🙂 If you ever do set up a blog (which is a really great idea, for MANY reasons by the way) – please let me know.

      I just had a couple of follow-up points:

      With JSON, what I mean by “extra code” is I need to do things like DateTime.Parse(jsonDocument.submittedDate) to verify that submittedDate really IS a DateTime. You’ll say, “Ugh! Doesn’t .NET know how to marshal that?” – yes, it says “I’ll treat that as whatever type it seems like” – so what if I get “12.12.12-14:22” in that submittedDate property? It won’t parse as a DateTime, the run-time thinks it’s a string – and the user sees a run-time error. So, as a software professional, I refuse to put code like that into production; code that just “assumes”. I quote myself in my classes, and this is something I’ve been saying for many, many years: “Assumptions are at the root of every software bug.” So – when you enter into this world of dynamic and weak types, that is what I see – a bunch of bugs waiting to happen. Before, the compiler enforced it for me – now I have to write code to get the same type-safety! So the next-gen technologies came around saying, “hey, you hate all these rules – come to our platform, we don’t have rules!” – and it’s like people don’t realize that this is a choice, a trade-off!

      Anyhow – I’ll defer to you on the NoSQL stuff, as I’ve yet to get a single product working with even a “Hello, World!” project. I’ve tried Couch, Mongo, and Raven so far – and they are all pretty miserable failures on .NET. So, I am talking out of ignorance a bit on this topic.

      Anyhow – thanks again for the insight, it’s always appreciated!



  4. kentrancourt says:

    I used to blog. I stopped. I even shut it down. I AM starting a new one, but I might blog about life, the universe, and everything rather than just tech stuff.

    Re: parsing of dates, I’m not understanding something. If your domain object SAYS a field is a DateTime: (a) shouldn’t the unmarshalling process understand that? And (b) shouldn’t you be able to configure (e.g. by annotation, perhaps) the acceptable date format? This is what we do. If you’re accepting dates of the form yyyy-MM-dd and someone gives you something else, then the unmarshaller recognizes this as the wrong type. We don’t write a lick of code to accommodate this.
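    As a contrived illustration of what I mean by “recognizes this as the wrong type” – even the plain JDK’s SimpleDateFormat, with leniency turned off, already rejects bad input; the unmarshalling frameworks just wire this up for you via configuration (the class and method names here are made up for the sketch):

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;

public class StrictDateParsing {

    // Returns the parsed Date, or null if the input doesn't match the
    // configured format - the "recognize the wrong type" step.
    static Date parseStrict(String input) {
        SimpleDateFormat format = new SimpleDateFormat("yyyy-MM-dd");
        format.setLenient(false); // reject "2012-13-45" instead of rolling it forward
        try {
            return format.parse(input);
        } catch (ParseException bad) {
            return null; // in a real API, surface this as a validation error instead
        }
    }

    public static void main(String[] args) {
        System.out.println(parseStrict("2012-11-08") != null);      // true
        System.out.println(parseStrict("12.12.12-14:22") == null);  // true - rejected
    }
}
```

    Not a line of that lives in our application code – it’s the framework’s job.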

    On NoSQL, here’s an interesting thought. Maybe it’s NOSQL, not NoSQL. The latter means “no SQL,” but the former stands for “not only SQL.” As with all things, we must select the right tool for the right job. Document-oriented databases don’t always fit the bill. On my big side project, we learned early on that Mongo was the wrong choice. We switched to Neo4J – which is a fully transactional graph-oriented datastore. I’m very happy with it. I personally hate that most document-oriented datastores aren’t transactional, but there are cases where you don’t need that. If you are trying to do analytics and crunch massive amounts of data in batch using map-reduce, Mongo is GREAT. Again, we must always select the right tool for the right job. And in many cases, relational IS still the way to go. 🙂

