Tuesday, March 24, 2015

Rollback a specific commit and replace with the correct one - in git

For a specific task I had to delete some files, move others, and update more.
I committed something that was not completely correct (commit A). I needed to remove that commit from the history and create a new one in its place.
My options were:

1. to amend the commit (commit A) by making some corrections
(commit Y) <=================================== master
   |
   \
    -(commit X)-(commit A) <================ my-branch-name

2. to create a reverse commit and create a new correct commit (results in multiple commits)
(commit Y) <=================================== master
   |
   \
    -(commit X)-(commit A)-(reverse commit A)-(commit B) <== my-branch-name

3. to reset to commit X, create a correct commit (commit B) on top of it, and then reset the HEAD of my-branch-name to that commit.
(commit Y) <=================================== master
   |
   |                     (commit A)
   \
    -(commit X)-(commit B) <================ my-branch-name

Option 1: amend previous commit

Option 1 would be as follows:
On top of the erroneous "commit A" I apply all the corrective changes, stage them, and then amend the previous commit:
    git commit --amend
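A minimal sketch of the amend workflow in a throwaway repository (the directory, file name, and messages are made up for illustration):

```shell
# option 1: fix the files, stage everything, and fold the fix
# into the previous commit, which is rewritten in place
rm -rf /tmp/amend-demo && mkdir /tmp/amend-demo && cd /tmp/amend-demo
git init -q
git config user.email "you@example.com" && git config user.name "you"

echo "wrong content" > notes.txt
git add notes.txt
git commit -q -m "commit A"              # the erroneous commit

echo "right content" > notes.txt         # the corrective change
git add notes.txt
git commit -q --amend -m "commit A"      # commit A now contains the fix

git log --oneline                        # still a single commit
```

Note that amending rewrites the commit's sha1, so this is only safe on commits that have not been pushed yet.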

Option 2: create reverse commit

Option 2 has the following drawbacks:
        * There were multiple deleted and moved files. It wouldn't be optimal to create a reverse commit and then reapply my changes, since the commits would have to be squashed later.
        * The final changes should be included in a single commit, not in multiple ones.

Option 3: reset to a specific commit

Option 3 is the most elegant and appropriate solution, especially when the changes are generated by a script.
E.g. on the first run, the input of the script produced some changes, and commit A was created. After that, new entries for the script's input keep coming. Instead of working out which changes are not yet covered by commit A, it is better to reset to the previous commit and execute the script again with the full new input. This produces a new, complete set of changes that can be included in a single commit.
The first execution of the script produced changes A, B, D, F and G. All these changes were included in a new commit, "commit A", which became the HEAD of my-branch-name.
With the extra input, the script produces changes A, B, C, D, E, F and G. Imagine that this list could include thousands of entries; it is not easy to find the differences between the two lists of changes and amend "commit A" by hand.

To remove my last erroneous commit (commit A) from the history, I reset the current branch one commit backwards:
        git reset --hard HEAD~1

Perform the correct set of changes, stage them, and commit (commit B):
        git add .
        git commit -m "comment for commit B"

To reset the current branch HEAD to the correct "commit B" when it is no longer shown in git log, I can find the sha1 id of that particular commit by executing:
        git reflog
       
And to reset the HEAD of my-branch-name to that commit I execute:
        git reset --hard <commit-B-sha1>
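Putting the whole of option 3 together as one runnable sketch in a throwaway repository (file contents stand in for the script's output; all names are hypothetical):

```shell
# reproduce the option 3 workflow end to end
rm -rf /tmp/reset-demo && mkdir /tmp/reset-demo && cd /tmp/reset-demo
git init -q
git config user.email "you@example.com" && git config user.name "you"

echo "base" > data.txt
git add . && git commit -q -m "commit X"

echo "partial changes" >> data.txt        # first run of the script
git add . && git commit -q -m "commit A"  # the erroneous commit

git reset --hard HEAD~1                   # drop commit A; back at commit X

echo "complete changes" >> data.txt       # rerun the script with the full input
git add . && git commit -q -m "commit B"

git log --oneline                         # commit B sits directly on commit X
git reflog | head -n 3                    # commit A's sha1 is still recoverable here
```

If you ever reset away from commit B itself, the reflog output above is where you would find its sha1 for a `git reset --hard <commit-B-sha1>`.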

Tuesday, December 9, 2014

Connecting your codenvy development environment with a bitbucket git repository

Intro

Find below step-by-step instructions on how to connect your Codenvy development environment with a Bitbucket repository over SSH, using an SSH key generated by Codenvy.

1. Codenvy (generate SSH key)

Having a project open -> Window -> Preferences -> SSH Keystore -> Generate Key -> the application asks for a URL: give bitbucket.org
A new entry appears in the SSH keystore list with the host label bitbucket.org (as you entered it).
Press the View button on that entry, and the SSH key will be presented to you.
Copy the whole content of the message box, including the initial 'ssh-rsa' string and the final 'email@address.com'.

2. Bitbucket.org (add the key to your account)

Account -> SSH keys -> Add key
In the popup window type 'bitbucket' in the label text field, and paste the key generated by Codenvy into the key field.

3. Codenvy (add a remote git repository to your project)

Git -> Remotes -> Remotes -> The remote repositories message box lists the already declared remote repositories.
Press Add and, in the popup window, give the name 'bitbucket' in the name field and type ssh://git@altssh.bitbucket.org:443/<accountname>/<reponame>/ as described in this page: https://confluence.atlassian.com/display/BITBUCKET/Use+the+SSH+protocol+with+Bitbucket
where accountname is your account name in Bitbucket, and reponame is your repository name in Bitbucket.
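For reference, the same remote can be declared from any git command line. A sketch in a throwaway local repository (keep the <accountname> and <reponame> placeholders from above, substituting your own values):

```shell
# declare the Bitbucket remote from the CLI instead of the Codenvy UI;
# the altssh URL form routes SSH over port 443, per the Atlassian page above
rm -rf /tmp/remote-demo && mkdir /tmp/remote-demo && cd /tmp/remote-demo
git init -q
git remote add bitbucket "ssh://git@altssh.bitbucket.org:443/<accountname>/<reponame>/"
git remote -v    # lists the bitbucket remote for both fetch and push
```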

Friday, September 26, 2014

Intel Edison on my hands

I managed to order the new Intel embeddable, SD-card-sized computer from SparkFun.
https://www.sparkfun.com/products/13024

I hope I'm among the first who got it in Greece. It cost 40 euros for the product, 18 euros for postage from the US, and finally 46 euros for import customs.


Morse code LED - light emitting diode - with Raspberry pi

More than interested in embedded platforms and microcontrollers, I had acquired my 2 Raspberry Pis full of enthusiasm, after waiting at the end of long priority queues.
I downloaded some Linux SD card images and played with Raspbian and XBMC.
My enthusiasm faded out as I didn't have enough time to tinker more.
In the first months of this year (2014) I found out that Oracle was offering a training course that matched my needs exactly. Using my preferred programming language, Java, a Raspberry Pi, and some additional electronic components, a cutting-edge project would educate us about embedded computing. This was the perfect chance for me to learn some things I had always wanted to.
Unfortunately, I couldn't manage to follow the training, due to other commitments.
But I appreciated Oracle's decision to offer the material to the public with unlimited access. Thank you, guys!

Working with the training videos, I found out:
  • how a breadboard works
  • what breakout circuits are
  • several configuration issues on Linux
  • programming the I/O pins of the Raspberry Pi using the Java ME API
The most helpful tutorial on configuring the development environment with the Raspberry Pi and Raspbian is found here: http://www.oracle.com/webfolder/technetwork/tutorials/obe/java/RaspberryPi_Setup/RaspberryPi_Setup.html. I found this link in the supporting material of the training course.
It's great! And it's even easier than I thought it could be.
The first hands-on project was to blink an LED driven by the Raspberry Pi. I found it a bit too easy, and I thought it would be far more interesting to implement a Morse code generator.
Given a message, the machine should blink the LED with dits and dahs, and the result would be a correctly transmitted message translated into Morse code.

I found the morse code rules at: http://www.colorpilot.com/morse.html

The code for the MIDlet is as follows:


import javax.microedition.midlet.MIDlet;
/**
 *
 * @author stavros
 */
public class TestLed extends MIDlet {
   
    MorseDevice dev;
   
    @Override
    public void startApp() {
        String message = "SOS";
       
        System.out.println("Starting playing message: " + message);
        dev = new MorseDevice();
        dev.setMessage(message);
        dev.start();
    }
   
    @Override
    public void pauseApp() {
        // nothing to do while the app is paused
    }
   
    @Override
    public void destroyApp(boolean unconditional) {
        System.out.println("Finished playing message...");
    }
}

The code for the morse machine is as follows:


import java.util.Map;

/**
 *
 * @author stavros
 */
public class MorseDevice extends Thread {
  
    private final static int UNITMS=200;
  
    private GPIOLed led;
  
    private String message;
    public void setMessage(String message) {
        this.message = message;
    }
  
    private Map<String, MorseCode[]> map = MorseCode.getMap();

    @Override
    public void run() {
        led = new GPIOLed(18);
      
        for (int i=0; i<message.length(); i++) {
            char character = message.charAt(i);
          
            if (character == ' ') {
                // skip spaces here; the word pause is inserted when the
                // next character is inspected below
            }
            else {
                MorseCode[] code = map.get(String.valueOf(character));
                play(code);

                if (i < message.length()-1) {
                    char nextchar = message.charAt(i+1);
                    if (nextchar == ' ') {
                        wordpause();
                    }
                    else {
                        letterpause();
                    }
                }
            }
        }
    }
  
    private void play(MorseCode[] code) {
        for (int j = 0; j < code.length; j++) {
            this.led.on();
            delay(code[j]);
            this.led.off();
            // one-unit gap between the elements of the same letter,
            // per the Morse timing rules linked above
            if (j < code.length - 1) {
                delay(UNITMS);
            }
        }
    }
  
    private void wordpause() {
        delay(7*UNITMS);
    }
  
    private void letterpause() {
        delay(3*UNITMS);
    }
  
    private void delay(MorseCode mc) {
        if (MorseCode.DIT.equals(mc)) {
            delay(UNITMS);
        }
        else if (MorseCode.DAH.equals(mc)) {
            delay(3*UNITMS);
        }
    }
  
    private void delay(int milliseconds) {
        try {
            sleep(milliseconds);
        }
        catch(InterruptedException ie) {
            System.out.println(ie.getMessage());
        }
    }
  
}

The morse code encoding class is as follows:


import java.util.HashMap;
import java.util.Map;
/**
 *
 * @author stavros
 */
public enum MorseCode {
    DIT,
    DAH;
   
    public static Map<String,MorseCode[]> getMap() {
        Map<String, MorseCode[]> map = new HashMap<>();
       
        map.put("A", new MorseCode[]{DIT, DAH});
        map.put("B", new MorseCode[]{DAH, DIT, DIT, DIT});
        map.put("C", new MorseCode[]{DAH, DIT, DAH, DIT});
        map.put("D", new MorseCode[]{DAH, DIT, DIT});
        map.put("E", new MorseCode[]{DIT});
        map.put("F", new MorseCode[]{DIT, DIT, DAH, DIT});
        map.put("G", new MorseCode[]{DAH, DAH, DIT});
        map.put("H", new MorseCode[]{DIT, DIT, DIT, DIT});
        map.put("I", new MorseCode[]{DIT, DIT});
        map.put("J", new MorseCode[]{DIT, DAH, DAH, DAH});
        map.put("K", new MorseCode[]{DAH, DIT, DAH});
        map.put("L", new MorseCode[]{DIT, DAH, DIT, DIT});
        map.put("M", new MorseCode[]{DAH, DAH});
        map.put("N", new MorseCode[]{DAH, DIT});
        map.put("O", new MorseCode[]{DAH, DAH, DAH});
        map.put("P", new MorseCode[]{DIT, DAH, DAH, DIT});
        map.put("Q", new MorseCode[]{DAH, DAH, DIT, DAH});
        map.put("R", new MorseCode[]{DIT, DAH, DIT});
        map.put("S", new MorseCode[]{DIT, DIT, DIT});
        map.put("T", new MorseCode[]{DAH});
        map.put("U", new MorseCode[]{DIT, DIT, DAH});
        map.put("V", new MorseCode[]{DIT, DIT, DIT, DAH});
        map.put("W", new MorseCode[]{DIT, DAH, DAH});
        map.put("X", new MorseCode[]{DAH, DIT, DIT, DAH});
        map.put("Y", new MorseCode[]{DAH, DIT, DAH, DAH});
        map.put("Z", new MorseCode[]{DAH, DAH, DIT, DIT});
       
        map.put("0", new MorseCode[]{DAH, DAH, DAH, DAH, DAH});
        map.put("1", new MorseCode[]{DIT, DAH, DAH, DAH, DAH});
        map.put("2", new MorseCode[]{DIT, DIT, DAH, DAH, DAH});
        map.put("3", new MorseCode[]{DIT, DIT, DIT, DAH, DAH});
        map.put("4", new MorseCode[]{DIT, DIT, DIT, DIT, DAH});
        map.put("5", new MorseCode[]{DIT, DIT, DIT, DIT, DIT});
        map.put("6", new MorseCode[]{DAH, DIT, DIT, DIT, DIT});
        map.put("7", new MorseCode[]{DAH, DAH, DIT, DIT, DIT});
        map.put("8", new MorseCode[]{DAH, DAH, DAH, DIT, DIT});
        map.put("9", new MorseCode[]{DAH, DAH, DAH, DAH, DIT});
       
        return map;
    }
   
}

The code for the LED is as follows:


import java.io.IOException;
import java.util.logging.Level;
import java.util.logging.Logger;
import jdk.dio.DeviceManager;
import jdk.dio.gpio.GPIOPin;
/**
 *
 * @author stavros
 */
public class GPIOLed {
   
    private GPIOPin pin;
   
    public GPIOLed(final int pin) {
        try {
            this.pin = DeviceManager.open(pin);
        }
        catch(Exception ioe) {
            System.out.println(ioe.getMessage());
            this.pin = null;
        }
    }
   
    public void on() {
        try {
            this.pin.setValue(true);
        } catch (IOException ex) {
            System.out.println(ex.getMessage());
        }
    }
   
    public void off() {
        try {
            this.pin.setValue(false);
        } catch (IOException ex) {
            System.out.println(ex.getMessage());
        }
    }
   
}

Finally, this is a picture of the breadboard connected to the Raspberry Pi, with the green LED lit:



Wednesday, June 12, 2013

Book review: Hadoop Real-world solutions cookbook (Packt)



Introduction

Hadoop is a cutting-edge tool, and everybody in the software industry wants to know about it. Initially we learn that the large amounts of data coming from web 2.0 applications and social media are rich in valuable raw data. And then a quest for the best processing tools begins.
The NoSQL movement is highly connected with big data technologies, and its evolution has been remarkable. Hundreds of new persistence solutions and frameworks have been released, some offering high quality and some just being very well advertised. All of them offer, in short, the same advantages: easy scalability, great speed of random access, and more intuitive structures that need less time for programmatic mapping.
World-leading technology companies have promoted the development of these technologies; one of the most popular programming models to emerge is MapReduce, and Hadoop managed to become one of its mainstream implementations.
OK, we have learned the basics, and now we have some production applications to implement and maintain. Of course we will use different data sources, including text files, relational database servers, and NoSQL clusters. And of course there is a large variety of useful tools out there to choose from. To begin with, we need to decide which tools to learn first, which are the most appropriate for our case, and how exactly to solve possible problems.
Hadoop Real-World Solutions Cookbook by Jonathan R. Owens, Jon Lentz and Brian Femiano is a book that does what it promises: it offers recipes of real-world working solutions using Hadoop alone, or in well-integrated systems with supplementary open source tools. The recipes are organized in 10 chapters, and every chapter is separated into several sections. Each section has the format of a "how to" article with preparation steps, execution steps, and explanatory text. Code examples are extensive, and they are available for download along with the sample datasets used throughout the book (after registration on the Packt Publishing support site).

 

Chapters 1-2

We learn that there are command-line tools that help import our data files into the magical Hadoop file system (HDFS). And if we have some relational data in a relational database server, we can export and import it using an open source tool called Sqoop, in collaboration with JDBC. The most advanced recipes include real-time access to HDFS files from Greenplum, as external tables, and importing data to HDFS from special data sources using Flume. Next, we learn how to compress our data in HDFS and how to use different serialization formats (Avro, Thrift and Protocol Buffers).

 

Chapters 3-4

Some great traffic statistics and analytics can be created and processed using MapReduce in the Hadoop processing environment. The recipes in these 2 chapters explain how Apache web server log files can be processed, mainly using Pig and Hive, in order to extract very useful information such as session logging, page view calculations, and geographical event data. Moreover, there are recipes that explain how log files can be mapped as external tables, and proposed recipes for effectively using other external data sources, such as news archives.

 

Chapter 5

A whole chapter is dedicated to the concept of joining datasets. There are recipes giving examples of replicated join, merge join and skewed join, mainly using Pig. Additionally, more advanced techniques are presented for full outer joins and increased performance, using Hive and the Redis key-value store.

 

Chapter 6-7

In these two chapters, big data analysis is the main concern of the recipes. Initially, simple MapReduce recipes using Pig and Hive are presented, to process large amounts of data and derive complex pieces of information: time-sorted aggregated data, distinct values of a variable from very large sets, similarity of data records, and outlier discovery in a time series. To better tackle these kinds of problems, the authors suggest graph processing technologies and machine learning algorithms, so chapter 7 presents recipes using Apache Giraph and Mahout in collaboration with Hadoop.

 

Chapter 8

Chapter 8 is dedicated to debugging. Of course, a lot of testing should be performed on every aspect of any distributed MapReduce solution. For this reason, Hadoop offers the Counters mechanism, which exposes the internals of the map and reduce phases of jobs in a very practical and user-friendly format. Furthermore, a unit testing framework called MRUnit is presented, with the basic features of a testing framework, but for map and reduce phases. Going one step further, the authors present a recipe for generating test data with a very powerful Pig tool called illustrate. And finally, a recipe addresses running MapReduce in local mode, for development purposes, enabling local debuggers from within the IDE.

 

Chapter 9

Chapter 9 is dedicated to administrative tasks. These recipes explain how distributed mode is configured in Hadoop systems, how to add or remove nodes on a cluster, and how to monitor the health of the cluster; finally, some tuning tips are provided.

 

Chapter 10

In the final chapter, the authors suggest Apache Accumulo for the persistence layer. Inspired by Google's BigTable, Apache Accumulo has many unique features, such as iterators, combiners, scan authorizations and constraints. In combination with MapReduce, example recipes present loading and reading data from Accumulo tables.

 

Conclusion

Overall, this is a recipe-based cookbook, and as such, it contains task-driven sections and chapters. This is not a book to be read from beginning to end; it is better used as a reference. In other words, not all of these recipes are appropriate for every reader. A sufficiently experienced reader can execute the recipes (which sometimes include downloading tool source code from GitHub) and use this cookbook to select certain tools and solutions. Finally, I would like to note that the recipes are addressed to the whole range of IT professionals: developers, devops and architects, and I think the best way to use the book is as an asset of a development team, or as a guide for experienced developers planning "one man show" startups.

Saturday, January 26, 2013

My opinion on technical debt

Definition of debt.

Speaking of debt, we have to know that there are two participating parties: the debtor and the creditor. The debtor has an idea and needs some assets to realize it. These assets are not immediately available to the debtor, but they are estimated to become available to him within a computed amount of time. Here is the point where the creditor enters the game. The creditor has these assets, which the debtor needs, immediately available, and he wants to sell them to him. Of course, the transaction cannot be completed in one step, so the two parties agree on an arrangement. The creditor's motivation to enter this arrangement is the interest that will be derived. This is an investment for the creditor, and a bet for the debtor.
The debtor now faces the challenge of succeeding with his idea, so that he can return the value of the assets plus the interest and, of course, make a profit for his own effort.
Metaphorically, everyone has an idea at any time of his life, and he always has to make decisions while realizing it. Right decisions prove to be those that result in a profitable outcome. A profitable outcome means that the return of the assets and their interest is safe; the assets can be a wide variety of things, like ethics. Good ethics mean the environment trusts the debtor, and the debtor can count on that trust in the future.

Software development economy.

Software development is usually driven by contracts between clients and software houses (software service providers). Besides the technical team, there is a whole system of managers, marketing teams, accountants, etc. participating in a project. And finally there are governments with taxes: both clients and service providers are obliged to pay taxes while working for the success of their projects.
So, inevitably, in a system where every process is translated into money (salaries, contract values, profits, taxes), decisions are driven by money too.
A client asks for a specific solution (the best he can get) to some problems; he is making an investment in order to make some profit. The software service provider offers the solution for a profitable price. Once the contract is signed, the deadlines become the most important aspects of the project for the client.

Parallelism of technical debt in software development.

Technical debt is of course a matter of work ethics. But it is not only that.
When a new contract is signed, a war starts at the premises of the software service provider. The contract may already have been evaluated, but in many cases only the pre-sales department is familiar with it. Technical teams evaluate the project in order to estimate whether the deadlines are feasible. I remind you that the contract is already signed, and of course it contains some deadlines, but the technical teams hardly know that. The managers' brainstorming team will evaluate the technical estimations, and they are responsible for the final agreement on the time plan. The best manager will have the best idea: the new software solution may be a compilation of older legacy projects that have already been tested in production environments, so there is a considerable reduction of implementation and testing effort. Integration implementation and testing is usually a cost hidden from everyone except the developers. Managers' favorite word in software houses is "modules". And because they call them that, they think modules are always easily configurable and reusable, loosely coupled from any specific product.
Decisions are the drivers of our path through any endeavor, even when they are technical decisions concerning the development of software solutions. Unfortunately, decisions in such cases are taken by the service provider's managers, because they are considered to be higher in the hierarchy than senior developers or even architects. The reason is that among the roles of a manager are also some financial responsibilities.
Technical debt may derive from technical decisions and/or financial decisions combined. Technical debt means that a project is technically inefficient because it costs too much to maintain and to evolve. There are technical and financial terms together in the same sentence (isn't it great?).
And here comes the quality of the software. Definitely, a software solution or a software provider with technical debt is not considered able to offer high-quality products/services. Wrong or biased decisions in financial and/or technical matters lead to low-quality products/services. And why is it called a technical debt? Because someone is called to deal with the defective product/service, working overtime, sacrificing weekends, and over-negotiating to convince clients. The product/service by itself hardly makes money for the investor and his interest. The service provider is pushed to work extra for his profits. This extra work may be offered by the technical team or by the managers' team: the technical team can work overtime for months in order to improve the quality, or the managers can be super-convincing negotiators and guarantee the sustainability of a low-quality product/service.

Conclusion.

Finally, I have to mention that hard problems need good and experienced solvers.
A system that has to be built from scratch against well-written, frozen functional specifications, defining solid and clear functionality, is the perfect task for junior developers. Of course we need some senior developers and at least an architect for any serious software solution delivery, but the majority of the team members should be, and can be, junior developers.
Senior developers are better suited to solving difficult problems. When the software is a mess and the deadlines are strict, a developer has to avoid a lot of pitfalls and face some challenges. Experienced developers with strong problem-solving abilities are more than suitable then. That's why senior developers get paid better. In other words, technical debt is a decisions problem that emerges sooner or later in every software service provider, and the key to resolving it is to have capable and experienced members in the working team.
In a world with perfectly defined and frozen functional specifications, and with frameworks and platforms fully compatible with each other, senior developers would be a luxury, and technical debt would be an artificial literary term.