Monthly Archives: April 2009

A battle royale for RIA market? Isn’t something missing?

I just read Jeff Feinman’s report in SDTimes. It is all nice, but in my view it is a colorful account of the visible tip of the RIA iceberg that ignores the submerged bulk of corporate applications. True, R.J. Owen is quoted on JavaFX’s better fit for the corporate environment, yet I did not find any consideration in this report of the main hurdles of Enterprise Rich Internet Applications, which relate to the coupling between the Client and Server tiers of an RIA. In defense of Jeff, I could say that indeed none of the three platforms (Adobe Flex, Microsoft Silverlight and JavaFX) deals with that challenge – all of them address only the Client tier. But then, when the subject is the RIA market, I think one should address all aspects of RIAs, and not just the Client-tier platforms.

Compared to the present standard of Enterprise Applications (Client/Server with a fat Client), RIAs have very compelling advantages for enterprises: they run anywhere – remote or local; support on-premises and off-premises deployment; offer a rich, interactive user experience and a native platform look & feel without the hassle of a local fat client; reduce the cost of ownership; improve scalability; and tighten security.

But in order to make these advantages usable, RIA platforms need to take the sting out of coupling management. A Client/Server application involves a fairly simple architecture, relying on a permanent connection between the Server and the Client. With a tightly coupled design, there is no need to explicitly manage or preserve various logic states. Conversely, web applications, which centralize their processing in the Server, leave the Client essentially decoupled, or loosely coupled. As long as web applications feature short, simple logic processes and limited interactivity, they can usually be implemented with standard Web architectures and simple session management. With Ajax we are even starting to see LAN-style interaction, such as Google’s ‘Google Suggest’ function, which brings up popular search terms as you type each successive letter of your search word into the search field.
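The ‘Google Suggest’ style of interaction boils down to a very small idea: each keystroke queries an index of popular terms by prefix. A minimal sketch (the term list and function names are hypothetical, not Google’s actual implementation):

```python
# Hypothetical list of popular search terms, ranked by popularity.
POPULAR_TERMS = ["ria platform", "rich internet application", "rest api", "ruby"]

def suggest(prefix, terms=POPULAR_TERMS, limit=5):
    """Return up to `limit` popular terms starting with `prefix`."""
    p = prefix.lower()
    return [t for t in terms if t.startswith(p)][:limit]

print(suggest("ri"))  # → ['ria platform', 'rich internet application']
```

The interesting part is not the lookup itself but the interaction style: firing a request on every keystroke is exactly the kind of chatty, tightly coupled traffic that classic page-per-request web architectures were designed to avoid.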

However, broadband internet is not sufficient for tightly coupled business applications with tens of interactive fields per screen, typical of Enterprise Applications. To get around this limitation, RIAs have to partition processing between Server and Client. So you end up working with two physically separate but logically dependent logic sets, running in tandem on the Client and the Server. While smarter, the Client is now required to play in concert with the Server, and keeping the session coherent requires sophisticated state and session management. So while traditional Client/Server apps only needed the skills of a business developer, if you want to develop a non-hosted RIA using a Client-side platform you need to add sophisticated system programming skills to your team, adding considerably to the complexity and risk of the solution – a painful and costly sting.
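To make the coherence problem concrete, here is one common way a partitioned application keeps the two logic sets in step: version the server-side session state and reject client updates made against a stale snapshot. This is a hedged sketch of the general technique (optimistic concurrency), not the API of any of the platforms mentioned above:

```python
# Sketch: versioned session state shared between a smart Client and the Server.
class StaleStateError(Exception):
    pass

class ServerSession:
    def __init__(self):
        self.version = 0   # bumped on every accepted update
        self.state = {}

    def snapshot(self):
        # The Client pulls the current state plus a version token.
        return self.version, dict(self.state)

    def apply(self, client_version, changes):
        # The Client pushes changes along with the version it last saw.
        if client_version != self.version:
            raise StaleStateError("client state is out of date; re-sync first")
        self.state.update(changes)
        self.version += 1
        return self.version

session = ServerSession()
v, local = session.snapshot()
session.apply(v, {"customer": "ACME"})       # accepted; version becomes 1
try:
    session.apply(0, {"customer": "Other"})  # stale version token: rejected
except StaleStateError:
    pass
```

Even this toy version hints at the hidden work: re-sync protocols, conflict resolution and failure handling are exactly the “system programming” burden that end-to-end platforms absorb for you.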

There is now a new breed of end-to-end platforms that can provide a comprehensive answer to this RIA iceberg challenge, mostly targeted at the XaaS model. Forrester recently covered the PaaS market in its report “Platform-As-A-Service Is Here: How To Sift Through The Options”, and Gartner published a similar report titled “Application Infrastructure for Cloud Computing: An Emerging Market”. One of the most popular platforms that takes the sting out of Enterprise RIAs is currently available only as a hosted platform. An alternative that is also available on-site (the venerable “Private Cloud”) is Magic Software’s uniPaaS, which can be used either on-site or as a hosted Platform-as-a-Service (PaaS), providing the choice “to be or not to be” in the Cloud. The key difference from Client-tier platforms such as JavaFX is that these platforms provide all the parts of the solution – including the hidden part of the ‘iceberg’ – without requiring you to develop it separately. Hence ‘end to end’. All that is really needed is to describe the application’s business logic and design the compelling user interface; the platform then takes care of the rest.


Dark Clouds, SOA Obituaries, and how many angels can dance on the head of a pin

McKinsey published a report on Cloud Computing a few days ago, trying to pin down the definition of Cloud Computing and highlight its economics, in particular when comparing holistic approaches – the cost of an on-premise data center (hardware and infrastructure, if I understood correctly) compared to the cost of the same facility in the Cloud (Amazon in this case). One of their conclusions is that above a certain size, the Cloud is more expensive than on-premise.

Last January, Anne Thomas Manes published her famous blog post titled “SOA Is Dead”, claiming that most SOA projects failed to deliver the promised benefits, or worse. Unlike McKinsey, she did not go into a lengthy discussion of defining SOA, but the title was enough to unleash a storm in our industry.

The question I ask myself and others is how much of the commotion is for internal consumption of the experts, and how much of it really matters to those who consume the stuff and end up footing the bill.

I do not personally know IT professionals who conducted an SOA project in order to implement SOA. I do know people who chose to use SOA principles and applied them going forward, sometimes retrofitting and sometimes transforming, taking advantage of the service orientation and cleaner design. So what is in the statement “SOA Is Dead” beyond provocative semantics and a great opportunity for industry experts to express themselves?

Now comes the “dark cloud” commotion, with a very similar effect. I would be very surprised if a large company would simply go along with the generic McKinsey report and use it for decision making without a serious subjective evaluation.

My point? Let’s not waste energy debating theology; let’s use it instead to make concepts more understandable and to share experience and best practices.

Differentiating Situational and Systematic Applications

I think the industry does not distinguish sufficiently between Situational Applications and more persistent solutions (what would you call that type of apps? Systematic? Core? Persistent?).

I have witnessed the deep frustrations of IT managers who adopted a Situational Apps tool thinking it could be used for any type of solution, ran into walls late in their projects, and ended up with solutions unsatisfactory in both functional and technical respects.

That does not mean that Situational is not good – just that one should use it for what it is meant for.

The story (rather, history) of Magic Software is quite enlightening in this respect. When we first launched Magic II in the mid-80s, we were in the midst of the first wave of the situational buzz, with data-oriented application tools such as Framework, dBase, etc. Magic II innovated by offering a metadata-driven environment, which was much easier for business professionals to master than code-driven tools. So we happily promoted it to ISVs and Enterprises, with a silver-bullet message that did away with waterfall and other development models. It worked great for a while, but as applications reached the production stage their design flaws became apparent. I recall being summoned to a large pharma enterprise by the head of their clinical tests department, because the application he proudly developed was gradually slowing down to a halt. It did not take long to discover that the data structure was so convoluted that some lookups ended up sequentially scanning tens of thousands of records.
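The performance cliff that application hit is a classic one, and easy to demonstrate. The sketch below uses made-up data (not the pharma system) to contrast a sequential scan with a keyed lookup over the same records:

```python
# Hypothetical data set standing in for tens of thousands of clinical records.
records = [{"id": i, "result": i * 2} for i in range(50_000)]

def scan_lookup(records, record_id):
    # O(n): walks every record until it finds a match - fine in a demo,
    # a gradual slowdown to a halt once the tables grow in production.
    for r in records:
        if r["id"] == record_id:
            return r
    return None

# Built once up front, O(n); every lookup afterwards is O(1).
index = {r["id"]: r for r in records}

def indexed_lookup(index, record_id):
    return index.get(record_id)

assert scan_lookup(records, 49_999) == indexed_lookup(index, 49_999)
```

The metadata-driven tool made it easy to build the application either way; nothing in the tool warned that the convoluted data structure forced the first path. That is precisely the kind of design decision a business professional without data-modeling experience does not notice until production.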

The majority of PaaS offerings today are for Situational Applications. That is quite understandable, since it takes significant effort and time to develop a PaaS with widgets granular enough to match the implementation power of coded environments. The danger, though, is that the hype and buzz are so loud and blinding that many prospects do not perceive the limitations (ending up as I described above).

So my call to action is to offer more down-to-earth information and transparency about what a technology is good for, so that those in need of situational tools are not overwhelmed with complexity, and those looking for persistent, core solutions do not try to build them on straws.

For Starters

Welcome to my first post. For some time now I have been urged by friends and colleagues to start a blog, but I just did not feel like it. A couple of weeks ago I spoke at the Web2.0 Kongress in Munich and was subjected to a high concentration of Social Computing evangelism, which dented my lack of interest and my reluctance to take on more obligations. Then came Easter and a few vacation days, which I spent at home – cooking, hosting friends and catching up on reading. I found myself engaging in discussions over various social media, and then came a few more prods from the Twitter direction. So I succumbed to the old saying – if you can’t beat them, join them. I jumped in, registered with Twitter (@Luttinger) and started this blog. Let’s see what comes out of it.