Wednesday, June 12, 2013

Book review: Hadoop Real-World Solutions Cookbook (Packt)



Introduction

Hadoop is a cutting-edge tool, and everybody in the software industry wants to know about it. First we learn that the large volumes of data coming from Web 2.0 applications and social media are rich in valuable raw information. Then the quest for the best processing tools begins.
The NoSQL movement is closely connected with big data technologies, and its evolution has been remarkable. Hundreds of new persistence solutions and frameworks have been released; some offer high quality, while others are just very well advertised. In short, all of them promise the same advantages: easy scalability, fast random access, and more intuitive data structures that need less time for programmatic mapping.
The world's leading technology companies have promoted the development of these technologies; one of the most popular programming models to emerge is MapReduce, and Hadoop has become one of its mainstream implementations.
OK, we have learned the basics, and now we have production applications to implement and maintain. Of course we will use different data sources, including text files, relational database servers, and NoSQL clusters. And of course there is a large variety of useful tools out there to choose from. To begin with, we need to decide which tools to learn first, which are the most appropriate for our case, and how exactly to solve the problems that come up.
Hadoop Real-World Solutions Cookbook by Jonathan R. Owens, Jon Lentz, and Brian Femiano is a book that does what it promises: it offers recipes for real-world working solutions using Hadoop alone or in combination with supplementary open source tools. The recipes are organized into 10 chapters, and every chapter is divided into several sections. Each section has the format of a “how-to” article, with preparation steps, execution steps, and explanatory text. The code examples are extensive, and they are available for download, along with the sample datasets used throughout the book, after registration on Packt's support site.

 

Chapters 1-2

We learn that there are command-line tools that help to import our data files into the Hadoop Distributed File System (HDFS). And if we have relational data in a database server, we can export and import it using an open source tool called Sqoop, which works over JDBC. The most advanced recipes include real-time access to HDFS files from Greenplum, as external tables, and importing data into HDFS from streaming sources using Flume. Next, we learn how to compress our data in HDFS and how to use different serialization frameworks (Avro, Thrift, and Protocol Buffers).
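As a taste of what these recipes look like in practice, here is a minimal sketch of my own (not taken from the book) that copies a local file into HDFS through Hadoop's Java FileSystem API; the file paths are hypothetical:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsImport {
    public static void main(String[] args) throws Exception {
        // Reads core-site.xml etc. from the classpath to locate the cluster.
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        // Hypothetical paths: a local log file and an HDFS target location.
        Path local = new Path("/tmp/weblog.txt");
        Path remote = new Path("/data/raw/weblog.txt");

        // The programmatic equivalent of:
        //   hadoop fs -put /tmp/weblog.txt /data/raw/weblog.txt
        fs.copyFromLocalFile(local, remote);
        fs.close();
    }
}
```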

 

Chapters 3-4

Great traffic statistics and analytics can be produced using MapReduce in the Hadoop processing environment. The recipes in these two chapters explain how Apache web server log files can be processed, mainly using Pig and Hive, in order to extract very useful information such as user sessions, page view counts, and geographical event data. Moreover, there are recipes that explain how log files can be mapped as external tables, as well as recipes for making effective use of other external data sources, such as news archives.
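To give an idea of the kind of processing involved, here is a minimal map/reduce sketch of my own (the book mostly uses Pig and Hive for this) that counts page views per URL in Apache common-log lines:

```java
import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

// Counts page views per requested URL from Apache common-log lines such as:
// 127.0.0.1 - - [10/Oct/2012:13:55:36 -0700] "GET /index.html HTTP/1.0" 200 2326
public class PageViews {

    public static class LogMapper
            extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text url = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            // The request line is the quoted part; token 1 inside it is the URL.
            String[] quoted = value.toString().split("\"");
            if (quoted.length > 1) {
                String[] request = quoted[1].split(" ");
                if (request.length > 1) {
                    url.set(request[1]);
                    context.write(url, ONE);
                }
            }
        }
    }

    public static class SumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) {
                sum += v.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }
}
```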

 

Chapter 5

A whole chapter is dedicated to the concept of joining datasets. There are recipes giving examples of replicated, merge, and skewed joins, mainly using Pig. Additionally, more advanced techniques are presented for full outer joins and for increasing performance, using Hive and the Redis key-value store.
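As a rough illustration of the replicated (map-side) join idea, here is a sketch of my own in plain Java MapReduce; the book's recipes use Pig for this, and the file name and record formats below are hypothetical:

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Replicated (map-side) join: the small dataset is shipped to every mapper
// and held in memory, so no reduce phase is needed for the join itself.
public class ReplicatedJoinMapper
        extends Mapper<LongWritable, Text, Text, Text> {

    private final Map<String, String> cityByUserId = new HashMap<>();
    private final Text outKey = new Text();
    private final Text outValue = new Text();

    @Override
    protected void setup(Context context) throws IOException {
        // Hypothetical lookup file distributed to each node (e.g. via the
        // distributed cache); lines look like "userId,city".
        try (BufferedReader reader = new BufferedReader(new FileReader("users.csv"))) {
            String line;
            while ((line = reader.readLine()) != null) {
                String[] parts = line.split(",", 2);
                cityByUserId.put(parts[0], parts[1]);
            }
        }
    }

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // Large input lines look like "userId,pageUrl"; join on userId.
        String[] parts = value.toString().split(",", 2);
        String city = cityByUserId.get(parts[0]);
        if (city != null) {
            outKey.set(parts[0]);
            outValue.set(parts[1] + "\t" + city);
            context.write(outKey, outValue);
        }
    }
}
```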

 

Chapters 6-7

In these two chapters, big data analysis is the main concern of the recipes. Initially, simple MapReduce recipes using Pig and Hive are presented for processing large amounts of data and deriving complex pieces of information: time-sorted aggregated data, the distinct values of a variable in very large sets, the similarity of data records, and outlier discovery in a time series. To better tackle this kind of problem, the authors suggest graph processing technologies and machine learning algorithms, so chapter 7 presents recipes using Apache Giraph and Mahout in collaboration with Hadoop.
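For example, extracting the distinct values of a variable maps naturally onto the shuffle phase; here is a minimal sketch of my own (the book approaches such tasks mainly through Pig and Hive):

```java
import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

// Finds the distinct values of a field by using the shuffle phase itself:
// identical keys are grouped, and the reducer emits each group exactly once.
public class DistinctValues {

    public static class ValueMapper
            extends Mapper<LongWritable, Text, Text, NullWritable> {
        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            // Assumes one value per input line; emit it as the key.
            context.write(value, NullWritable.get());
        }
    }

    public static class DistinctReducer
            extends Reducer<Text, NullWritable, Text, NullWritable> {
        @Override
        protected void reduce(Text key, Iterable<NullWritable> values, Context context)
                throws IOException, InterruptedException {
            // One output record per distinct key, however many duplicates arrive.
            context.write(key, NullWritable.get());
        }
    }
}
```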

 

Chapter 8

Chapter 8 is dedicated to debugging. Of course, a lot of testing should be performed on every aspect of any distributed MapReduce solution. For this reason, Hadoop offers the Counters mechanism, which exposes the internals of the map and reduce phases of a job in a very practical and user-friendly format. Furthermore, MRUnit, a unit testing framework for the map and reduce phases, is presented along with its basic features. Going one step further, the authors present a recipe for generating test data with a very powerful Pig tool called illustrate. Finally, a recipe covers running MapReduce in local mode for development purposes, enabling the use of local debuggers from within the IDE.
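To illustrate the Counters mechanism, here is a minimal mapper sketch of my own, with a hypothetical counter group and record format:

```java
import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Custom counters appear in the job's web UI and console summary,
// which makes data quality problems visible at a glance.
public class CountingMapper extends Mapper<LongWritable, Text, Text, Text> {

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // Hypothetical format: tab-separated lines with at least two fields.
        String[] fields = value.toString().split("\t");
        if (fields.length < 2) {
            // The group and counter names are arbitrary strings chosen by us.
            context.getCounter("DataQuality", "MALFORMED_RECORDS").increment(1);
            return;
        }
        context.getCounter("DataQuality", "GOOD_RECORDS").increment(1);
        context.write(new Text(fields[0]), new Text(fields[1]));
    }
}
```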

 

Chapter 9

Chapter 9 is dedicated to administrative tasks. These recipes explain how distributed mode is configured in Hadoop systems, how to add or remove nodes in a cluster, and how to monitor the health of the cluster; finally, some tuning tips are provided.

 

Chapter 10

In the final chapter, the authors suggest Apache Accumulo for the persistence layer. Inspired by Google's BigTable, Apache Accumulo has many distinctive features, such as iterators, combiners, scan authorizations, and constraints. In combination with MapReduce, example recipes demonstrate loading data into and reading data from Accumulo tables.
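As a taste of the Accumulo client API, here is a minimal write sketch of my own, assuming the 1.5-era API; the instance name, ZooKeeper address, credentials, and table name are all hypothetical:

```java
import org.apache.accumulo.core.client.BatchWriter;
import org.apache.accumulo.core.client.BatchWriterConfig;
import org.apache.accumulo.core.client.Connector;
import org.apache.accumulo.core.client.ZooKeeperInstance;
import org.apache.accumulo.core.client.security.tokens.PasswordToken;
import org.apache.accumulo.core.data.Mutation;
import org.apache.accumulo.core.data.Value;
import org.apache.hadoop.io.Text;

public class AccumuloWrite {
    public static void main(String[] args) throws Exception {
        // Hypothetical instance name, ZooKeeper quorum, and credentials.
        ZooKeeperInstance instance =
                new ZooKeeperInstance("test-instance", "zk1:2181");
        Connector connector =
                instance.getConnector("root", new PasswordToken("secret"));

        // A mutation collects all changes for a single row key.
        Mutation mutation = new Mutation(new Text("user123"));
        mutation.put(new Text("info"), new Text("city"),
                new Value("Athens".getBytes()));

        BatchWriter writer =
                connector.createBatchWriter("users", new BatchWriterConfig());
        writer.addMutation(mutation);
        writer.close(); // flushes pending mutations to the tablet servers
    }
}
```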

 

Conclusion

Overall, this is a recipe-based cookbook, and as such it contains task-driven sections and chapters. This is not a book to be read from beginning to end; it is better used as a reference. In other words, not all of the recipes are appropriate for every reader. A sufficiently experienced reader can execute the recipes (which sometimes include downloading tool source code from GitHub) and use this cookbook to select particular tools and solutions. Finally, I would like to note that the recipes address the whole range of IT professionals: developers, devops engineers, and architects. I think the best way to use the book is as an asset for a development team, or as a guide for experienced developers planning “one-man show” startups.

Saturday, January 26, 2013

My opinion on technical debt

Definition of debt.

Speaking of debt, we have to know that there are two participating parties: the debtor and the creditor. The debtor has an idea and needs some assets to realize it. These assets are not immediately available to the debtor, but they are expected to become available to him within an estimated amount of time. This is where the creditor enters the game. The creditor has immediately available the assets that the debtor needs, and he wants to sell them to him. Of course, the transaction cannot be completed in one go, so the two parties agree on an arrangement. The creditor's motivation for entering this arrangement is the interest that will be earned. It is an investment for the creditor and a bet for the debtor.
The debtor now faces the challenge of succeeding with his idea, so that he can return the value of the assets plus the interest and, of course, still make a profit for his own effort.
Metaphorically, everyone has an idea at some point in his life, and he always has to make decisions while realizing it. The right decisions prove to be those that lead to a profitable outcome. A profitable outcome means that the returned value of the assets and their interest is safe, and those assets can be a wide variety of things, including ethics: good ethics mean that the environment trusts the debtor, and the debtor can count on that trust in the future.

Software development economy.

Software development is usually driven by contracts between clients and software houses (software service providers). Besides the technical team, a whole system of managers, marketing teams, accountants, and so on participates in a project. And finally there are governments with taxes: both clients and service providers are obliged to pay taxes while working for the success of their projects.
So, inevitably, in a system where every process is translated into money (salaries, contract values, profits, taxes), decisions are also driven by money.
A client asks for a specific solution to some problems (the best specification he can produce), and he is making an investment in order to make some profit. The software service provider offers the solution at a profitable price. Once the contract is signed, the deadlines become the most important aspect of the project for the client.

Parallelism of technical debt in software development.

Technical debt is, of course, a matter of work ethics. But it is not only that.
When a new contract is signed, a war starts on the premises of the software service provider. The contract may already have been evaluated, but in many cases only the pre-sales department is familiar with it. The technical teams evaluate the project in order to estimate whether the deadlines are feasible; I remind you that the contract is already signed, and of course it contains deadlines, but the technical teams hardly know them. A brainstorming team of managers evaluates the technical estimations and is responsible for the final agreement on the time plan. The best manager will have the best idea: the new software solution may be a compilation of older legacy projects that have already been tested in production environments, so there is a considerable reduction in implementation and testing effort. Integration implementation and testing is usually a cost hidden from everyone except the developers. Managers' favorite word in software houses is “modules”, and because they call them that, they think the pieces are always easily configurable and reusable, loosely coupled from any specific product.
Decisions drive our path through any endeavor, and that includes technical decisions concerning the development of software solutions. Unfortunately, in such cases decisions are made by the service provider's managers, because they are considered higher in the hierarchy than senior developers or even architects. The reason is that a manager's role also includes some financial responsibilities.
Technical debt may derive from technical decisions and/or financial decisions combined. Technical debt means that a project is technically inefficient: it costs too much to maintain and to evolve. Technical and financial terms appear together in the same sentence (isn't that great?).
And here comes software quality. Clearly, a software solution or a software provider carrying technical debt is not considered capable of offering high-quality products or services. Wrong or biased decisions on the financial and/or technical side lead to low-quality products and services. And why is it called a debt? Because someone is eventually called in to deal with the defective product or service, working overtime, sacrificing weekends, and over-negotiating to convince clients. The product or service by itself hardly makes back the investor's money and interest, so the service provider is pushed to work extra for its profit. This extra work may come from the technical team or from the managers: the technical team can work overtime for months to improve the quality, or the managers can be super-convincing negotiators and guarantee the sustainability of a low-quality product or service.

Conclusion.

Finally, I have to mention that hard problems need good and experienced solvers.
A system that has to be built from scratch against well-written, frozen functional specifications defining solid and clear functionality is the perfect task for junior developers. Of course we need some senior developers, and at least an architect, for any serious software solution delivery, but the majority of the team members should be, and can be, junior developers.
Senior developers are better suited to solving difficult problems. When the software is a mess and the deadlines are strict, a developer has to avoid many pitfalls and face real challenges; experienced developers with strong problem-solving abilities are exactly what is needed then. That's why senior developers get paid better. In other words, technical debt is a decision problem that emerges sooner or later at every software service provider, and the key to resolving it is to have capable and experienced members on the working team.
In a world with perfectly defined and frozen functional specifications, with frameworks and platforms that are fully compatible with each other, senior developers would be a luxury, and technical debt would be an artificial literary term.