Well, here we go: it is almost 12:30 am and I am just back in my hotel room. It must have been something like 10-11 hours of Devoxxing - and that was only the start. Even though I feel a bit tired and brain-drained, I am forcing myself to write the day's review so that friends, colleagues and anyone interested - especially back in Greece - can read the post early tomorrow morning while checking corporate email.
Session 1: The NoSQL, distributed, high-scalability wave on the database front has been gaining more and more momentum over the past two years. It is clear that certain projects are maturing and being widely adopted, so more people talk or buzz about them. Hadoop is becoming more and more famous, so I felt like attending this 3-hour session to get some serious introductory knowledge about it.
The session was split into 2 parts. The first one (which I liked the most) was about HDFS - the Hadoop Distributed File System, inspired by Google's GFS paper - and the MapReduce programming model. I have to admit that during the session I acquired a sufficient technical overview of both technologies, and some things were made very clear about their positive and negative aspects. One of the most important notes related to HDFS is that it is not intended for storing small files - HDFS blocks are 64 to 128 MB large, and anything smaller can result in wasted space in the local file system.
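To make the MapReduce part concrete, here is a minimal sketch of the idea as I understood it from the session: a "map" phase emits (word, 1) pairs from the input, and a "reduce" phase sums the counts per word. This is plain Java for illustration only - not the actual Hadoop API, and the class and input are made up.

```java
import java.util.HashMap;
import java.util.Map;

// Conceptual sketch of MapReduce word counting, in plain Java.
// Real Hadoop distributes the map and reduce steps across a cluster
// and reads the input from HDFS blocks.
public class WordCountSketch {

    // "Map": split each line into words and emit (word, 1);
    // "Reduce": sum the emitted values per word.
    public static Map<String, Integer> wordCount(String[] lines) {
        Map<String, Integer> counts = new HashMap<>();
        for (String line : lines) {
            for (String word : line.toLowerCase().split("\\s+")) {
                if (!word.isEmpty()) {
                    counts.merge(word, 1, Integer::sum); // the "reduce" step
                }
            }
        }
        return counts;
    }

    public static void main(String[] args) {
        String[] input = { "hadoop stores big files", "hadoop splits files into blocks" };
        System.out.println(wordCount(input));
    }
}
```

The point of the model is that both phases parallelize trivially: mappers work on independent blocks, reducers on independent keys.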
The second part was about Hive and Pig, 2 competing technologies that aim to give developers a querying layer for extracting or manipulating data stored in Hadoop. Hive offers an SQL-like interface (although it is very, very limited) and was originally developed by Facebook.
Apache Pig, on the other hand, was originally developed within Yahoo and uses its own expression language to extract large data sets out of Hadoop.
Both of the technologies demonstrated caused me a bit of frustration, mostly because of their current limitations (or developer unfriendliness). I can clearly see that their original developers had specific use cases in mind, and for those cases the tools deliver on their promises. I was just a bit puzzled trying to find a use case familiar to me where the combination of Hadoop and Hive/Pig could be applied.
Overall though, a very interesting presentation that made clear what Hadoop is, what its main purpose is, and which use cases fit it and which do not.
Session 2: This was actually my favorite session of the day. Having started with Hadoop, the MongoDB session seemed even more compelling compared to the others - plus, as I have already elaborated, the NoSQL wave is getting bigger and bigger every year.
So what is MongoDB? A hybrid DBMS that aims to combine the best of 2 worlds: the traditional RDBMS (Oracle, MySQL) and the fast, NoSQL, memory-based key-value technologies. In MongoDB there is no such thing as a database table; everything is treated as a document. At the same time there are no relationships in the SQL sense, so there are no joins - instead you have lists of documents embedded within other documents, forming trees and links between them.
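The "documents instead of joins" idea can be sketched like this: where SQL would keep a posts table joined to a comments table, a MongoDB-style document simply embeds the comments as a list inside the post. The sketch below uses plain Java maps and lists (not the MongoDB driver API), and all the field names and values are invented for illustration.

```java
import java.util.List;
import java.util.Map;

// Sketch of document-oriented modeling: one post document with its
// comments embedded as a nested list, replacing an SQL join.
public class DocumentModelSketch {

    public static Map<String, Object> samplePost() {
        return Map.of(
            "title", "Devoxx day one",
            "author", "paris",
            // embedded sub-documents instead of a foreign key + join
            "comments", List.of(
                Map.of("user", "alice", "text", "Nice write-up!"),
                Map.of("user", "bob", "text", "Try sharding next.")
            )
        );
    }

    public static void main(String[] args) {
        Map<String, Object> post = samplePost();
        List<?> comments = (List<?>) post.get("comments");
        System.out.println(post.get("title") + " has " + comments.size() + " comments");
    }
}
```

Reading the post fetches everything in one go, which is exactly why this model can feel simpler than assembling the same data from several joined tables.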
I have to admit I am very enthusiastic after this 3-hour session, and I have promised myself to try Mongo as soon as possible. My main motive is to try out the new way of modeling data - thinking in documents, lists and maps rather than tables, foreign keys, joins and enforced constraints. It seems that in many cases this type of modeling can prove much more efficient for both the application and the developer (since it is simpler, you spend more time adding new functionality rather than fighting your own domain model). Of course MongoDB is not perfect - for example, there are no distributed transactions (only single atomic operations), and there is no strict enforcement of constraints on document relationships compared to our RDBMS world - but it still sounds very compelling to try out. The scalability characteristics were also very promising; sharding and partitioning seemed reasonably powerful.
So, note to myself: play with MongoDB within the next month and blog about it!
Session 3: This talk was about PrettyFaces, a servlet filter extension that brings the Java web developer (on any web framework) the power of proper and clean URL rewriting. URL rewriting can be a pain - we all know it - and frameworks like JSF, especially in early versions, do not help at all (ever noticed the ugly URLs? I guess so). The flexibility of PrettyFaces and the simplicity of actually integrating it into any web app really got my attention. Worth looking at and considering if you want a nice tool for making your URLs nice and clean. As the presenter noted, the technology can be used in many other cases (apart from plain URL rewriting) and is being considered for embedding in stacks like Seam 3.
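To give an idea of what the integration looks like, here is a sketch of a PrettyFaces mapping file (conventionally `pretty-config.xml` on the classpath): one declarative mapping turns an ugly JSF view URL into a clean, parameterized one. The mapping id, pattern and view paths below are my own invented examples, and the exact schema/namespace may differ by version.

```xml
<!-- Hypothetical example mapping: a clean /store/{category} URL
     is rewritten internally to the real JSF view. -->
<pretty-config>
  <url-mapping id="viewStore">
    <pattern value="/store/#{category}" />
    <view-id value="/faces/shop/store.xhtml" />
  </url-mapping>
</pretty-config>
```

The filter does the rewriting in both directions, so links rendered by the framework also come out in the clean form.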
Session 4: This session was my least favorite of the day - it was not bad, just not in my main area of interest. We all use VisualVM (or quite a lot of people do); this session was about hacking a new VisualVM module and integrating it into the existing application for monitoring CPU cycles.
Session 5: After an hour's break, I joined the Seam 3 gathering featuring some of the main developers and leads of this technology stack. Worth noting the request from users to complete the Seam 3 in Action book, plus additional documentation on migrating projects from older versions of Seam. There was a lot of buzz around Seam Forge, a tool aimed at rapid project setup, resolving dependencies and introducing various modules.
Session 6: Despite the fact that I am not greatly interested in the Adobe technology stack, I dropped in on the second half of their BOF, just listening to various questions and requests. I managed to pick up some statements - for example, that the Flash/AIR player is getting faster on specific platforms (like the Mac), and that the side effects of high CPU load from just running the player or a related application will vanish. There is a lot of buzz around Google TV, and Samsung is partnering with Adobe to bundle Flex/AIR technology in their sets.
It was very nice seeing Heinz again after some time. I actually had the chance to share my late dinner with him, chatting about life in Greece, the economic downturn and, of course, Macs!
Heinz talked about his latest community project, jpatterns, an effort to build an annotation library that developers can use to clearly mark, in the code base, concrete implementations of known design patterns - like those defined by the famous Gang of Four or the J2EE design patterns. A very interesting discussion started about current experiences applying design patterns, the good and the bad, how developers might treat such an annotation set, whether this is actually going to help developers or pollute their code, and much more.
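To make the idea tangible, here is a minimal sketch of what pattern-documenting annotations could look like. The `@DesignPattern` annotation below is a hypothetical stand-in I wrote for illustration - it is not the real jpatterns API, whose annotation names and attributes may differ.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Sketch of the jpatterns idea: annotate a class so the pattern it
// implements is explicit in the code and discoverable via reflection.
public class PatternAnnotationSketch {

    // Hypothetical annotation, NOT the actual jpatterns one.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.TYPE)
    @interface DesignPattern {
        String name();            // e.g. "Singleton", "Decorator"
        String role() default ""; // which participant this class plays
    }

    @DesignPattern(name = "Singleton")
    static final class Registry {
        private static final Registry INSTANCE = new Registry();
        private Registry() {}
        static Registry getInstance() { return INSTANCE; }
    }

    public static void main(String[] args) {
        DesignPattern dp = Registry.class.getAnnotation(DesignPattern.class);
        System.out.println("Registry implements: " + dp.name());
    }
}
```

Because the annotation is retained at runtime, tools (or IDEs) could scan a code base and report which classes claim to implement which patterns - which is exactly the kind of discoverability the discussion was about.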
From my experience, in most cases I have seen developers over-engineer their code just to show off some design-pattern knowledge, without considering whether the code written is actually worth using or reading, or is at least in the context of the actual need. I really hate this over-engineering. Overall, patterns are fine to implement, but they require experience and good design knowledge in order to be applied properly.
So this was my review for today. I still have a couple of links for things I heard or read on Twitter and sessions I was not able to attend, but it is already very late and I have to get ready for tomorrow. Overall, a very nice start, with interesting sessions and lots of notes in my Devoxx notebook. I also had the privilege of getting my Activiti T-shirt from T. Baeyens and the rest of the team - I was very happy to see them and talk about Activiti and recent jBPM developments.
Thanks for your time! You may find some of today's photos here. I will be uploading new material every day.