Wednesday, March 26, 2014

saavn usability





I am a fan of the Bollywood music service Saavn, and I often play the same song all day, sometimes for days (the latest one is "Harry is not a bramhchari"), because after a while the song stops mattering and you can get into the coding groove.

Sometimes the little things in usability matter, and I think they need to work on tooltips. I wanted to hear the same song on loop, but I saw no button for repeat. The green icon looks intuitive as a repeat icon now that I have figured it out, but initially it was not obvious at all. I ended up using Inspect Element in Firefox to figure out which button was repeat and which was shuffle.




So if you have an icon-only toolbar, you need a tooltip on each icon. I just checked our storage file-listing toolbar, and luckily it has tooltips, so our UX team has done a good job ;).
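The fix is cheap too: in plain html a title attribute on each icon button gives you a native tooltip. The markup below is a made-up example, not Saavn's actual code:

<div class="player-toolbar">
    <button title="Repeat">&#8635;</button>   <!-- hovering shows "Repeat" -->
    <button title="Shuffle">&#8646;</button>  <!-- hovering shows "Shuffle" -->
</div>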

Tuesday, March 18, 2014

Finally uploading to S3 using java sdk

One of the reasons I joined my employer was to get some experience with AWS and S3. But so far the ride has been a roller coaster: I ended up working on something totally different, first we built our own S3-like system, and then the uploading to S3 and AWS was handled by a rockstar friend on the team :). Now it almost seems like I won't get a chance to work on it, so I decided to write a simple program anyway; I signed up for AWS and tried this. It is a simple thing and took me only 2-3 hours, but it was worth spending them, because for the past 3 weeks I have been cleaning up technical debt accumulated over years, and that does not bring much joy. Doing this did.

On that note, one of my teammates shared this with me today:

http://www.businessinsider.com/syndromes-drive-coders-crazy-2014-3



Here is the little upload/download/delete service I ended up with:

import java.io.File;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

import org.apache.commons.io.FileUtils;
import org.apache.commons.io.IOUtils;

import com.amazonaws.auth.AWSCredentials;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.DeleteObjectRequest;
import com.amazonaws.services.s3.model.GetObjectRequest;
import com.amazonaws.services.s3.model.PutObjectRequest;
import com.amazonaws.services.s3.model.S3Object;

// StorageService, StorageServiceException, AppLogger and StreamUtils are
// internal classes from our codebase, so treat them as blanks to fill in.
public class S3StorageService implements StorageService {
    private static final AppLogger logger = AppLogger.getLogger(S3StorageService.class);
    private String bucketName;
    private String accessId;
    private String secretKey;
    private String tempFolder;

    // construct with config values (an assumption; the post does not show how the fields are wired)
    public S3StorageService(String bucketName, String accessId, String secretKey, String tempFolder) {
        this.bucketName = bucketName;
        this.accessId = accessId;
        this.secretKey = secretKey;
        this.tempFolder = tempFolder;
    }

    @Override
    public String uploadObject(String guid, InputStream in) throws StorageServiceException {
        // S3's PutObjectRequest wants a File, so stage the stream to a temp file first
        File file = getTempStoragePath(guid);
        try {
            saveToTempFile(in, file);
            String key = getObjectKey(guid);
            AmazonS3 s3Client = getS3Client();
            s3Client.putObject(new PutObjectRequest(bucketName, key, file));
            return key;
        } finally {
            FileUtils.deleteQuietly(file);
        }
    }

    // spread temp files over guid-prefix subfolders so no single directory gets huge
    private File getTempStoragePath(String guid) {
        File baseDir = new File(tempFolder);
        File hashedPath = new File(new File(new File(baseDir, guid.substring(0, 2)), guid.substring(2, 4)),
                guid.substring(4, 6));
        hashedPath.mkdirs();
        return new File(hashedPath, guid);
    }

    // assumed implementation -- the body of this helper is not shown in the post
    private void saveToTempFile(InputStream in, File file) throws StorageServiceException {
        try {
            FileUtils.copyInputStreamToFile(in, file);
        } catch (IOException e) {
            throw new StorageServiceException(e);
        }
    }

    @Override
    public void deleteObject(String guid) {
        String key = getObjectKey(guid);
        AmazonS3 s3Client = getS3Client();
        s3Client.deleteObject(new DeleteObjectRequest(bucketName, key));
    }

    @Override
    public void downloadObject(String guid, OutputStream out) throws StorageServiceException {
        String key = getObjectKey(guid);
        AmazonS3 s3Client = getS3Client();
        S3Object object = s3Client.getObject(new GetObjectRequest(bucketName, key));
        InputStream objectData = null;
        try {
            objectData = object.getObjectContent();
            IOUtils.copy(objectData, out);
        } catch (IOException e) {
            throw new StorageServiceException(e);
        } finally {
            StreamUtils.closeQuietly(objectData);
        }
    }

    private String getObjectKey(String guid) {
        return "Files/" + guid;
    }

    private AmazonS3 getS3Client() {
        AWSCredentials awsCredentials = new BasicAWSCredentials(accessId, secretKey);
        return new AmazonS3Client(awsCredentials);
    }
}
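A throwaway main is enough to try it end to end; the bucket name, credentials and temp folder below are placeholders, not real values:

import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.util.UUID;

public class S3Demo {
    public static void main(String[] args) throws Exception {
        // placeholders -- substitute your own bucket, credentials and temp folder
        S3StorageService service = new S3StorageService(
                "my-test-bucket", "MY_ACCESS_ID", "MY_SECRET_KEY", "/tmp/s3-staging");

        String guid = UUID.randomUUID().toString();
        InputStream in = new ByteArrayInputStream("hello s3".getBytes("UTF-8"));

        String key = service.uploadObject(guid, in);   // push it up
        System.out.println("Uploaded as " + key);
        service.downloadObject(guid, System.out);      // pull it back down
        service.deleteObject(guid);                    // clean up the bucket
    }
}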

Monday, March 17, 2014

nginx install godaddy cert

Again on the same weekend, it looked like the cert on his dev site had expired, and the REST API tests were failing to curl due to the cert issue, so he gave me the cert files he got from GoDaddy and asked me to install them. This took 5 minutes but it was cool. The only complication was that GoDaddy gives you a cert plus an intermediate cert, but nginx only accepts one file, so the way you do it is:

cat your_godaddy.crt > your_final_chained.crt
cat gd_bundle.crt >> your_final_chained.crt

That's it, you have your chained cert, and all you need to do is point nginx at it:

    ssl_certificate /etc/nginx/xxx/your_final_chained.crt;
    ssl_certificate_key /etc/nginx/xxx/your_cert.key;
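Before reloading nginx it is worth a quick sanity check, using the same placeholder file names as above:

# eyeball the new cert's subject and expiry dates
openssl x509 -in your_final_chained.crt -noout -subject -dates
# make sure the nginx config parses, then reload it
sudo nginx -t && sudo nginx -s reload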



Looks like this is cert-expiry week; even our code-signing cert is expiring this weekend, so we will re-sign our java applet jars and deploy a hotpatch this weekend. I will update that in this post too.

Mirroring an svn repo

It seems very easy: just follow https://help.cloudforge.com/entries/22483722-Creating-an-SVN-backup-with-svnsync, install a crontab entry, and you are done; the gist is sketched below.
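Roughly, with made-up paths and a made-up repo URL, it comes down to:

# one-time setup: create the mirror and allow revprop changes (svnsync needs this hook)
svnadmin create /backup/myrepo-mirror
printf '#!/bin/sh\nexit 0\n' > /backup/myrepo-mirror/hooks/pre-revprop-change
chmod +x /backup/myrepo-mirror/hooks/pre-revprop-change
svnsync init file:///backup/myrepo-mirror https://svn.example.com/repos/myrepo
svnsync sync file:///backup/myrepo-mirror

# crontab entry: pull new revisions every hour
0 * * * * svnsync sync file:///backup/myrepo-mirror >> /var/log/svnsync.log 2>&1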

jenkins aggregating downstream build results and how a "-" can be a PITA

You want to reuse common pieces of jenkins builds to stay DRY. For example, if you have multiple instances like dev, psr, and uat with a deploy target per instance, or installer targets per branch, then the only thing that differs is the instance you are deploying to or the branch you are building, respectively. Two weekends ago I ran into an issue while creating a target to auto-deploy a build to an instance. The problem was that I had the following build targets:

UAT -> AnyDeploy -> Run-Tests
QA  -> AnyDeploy -> Run-Tests
Dev -> AnyDeploy -> Run-Tests

So AnyDeploy and Run-Tests were shared targets that I triggered via parameterized builds, and I wanted to archive the test results on the top-level targets (UAT, QA, Dev). jenkins already has an option to aggregate downstream test results, but the problem was that my friend had written a cryptic test runner and junit formatter that printed results in a non-standard format. So I installed the Copy Artifact plugin, and at the end of each build I copied the last build's artifact back. But we ran into interesting issues, because:

the UAT tests ran the full suite and took 10 minutes, while the dev suite ran a mini suite and finished in 1 minute. So if someone triggered the UAT tests and then dev, dev would finish first, and the wrong results would get attached to the UAT build when it finished. He asked for my help to fix it. This was a nice problem; it took me 2 hours but was worth it.

jenkins has a way to give you the triggered build number of a downstream target, in the form $TRIGGERED_BUILD_NUMBER_XXX, where XXX is the downstream job's name. It sounded like a piece of cake to configure the Copy Artifact step with this and get the job done, but it was not working.



Finally, after struggling for 2 hours with him, I found out that it was the "-" that was causing the issue. I renamed the jenkins job to Run_Tests, and that immediately solved it.
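In hindsight the reason is simple: "-" is not a legal character in an environment variable name, so anything that expands variables shell-style can never reference a job called Run-Tests. A quick illustration:

# the shell reads this as $TRIGGERED_BUILD_NUMBER_Run followed by the literal text "-Tests"
echo "$TRIGGERED_BUILD_NUMBER_Run-Tests"
# after the rename the variable name is valid and expands as expected
echo "$TRIGGERED_BUILD_NUMBER_Run_Tests"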

Saturday, March 1, 2014

nodejs push java redis

Disclaimer: the code below is just a general guideline, and I am not pasting the full code, so you will have to fill in the blanks if you want to use it.

Continuing from http://neopatel.blogspot.com/2014/02/redis-publish-subscribe-to-nodejs.html, I was finally able to write the nodejs app that listens to events from redis and publishes messages to the browser. The architecture looks like this:



1) Your browser uses the SockJS javascript library to connect to the nodejs server and then listens for events. The html code looks like:

<script src="http://cdn.sockjs.org/sockjs-0.3.min.js"></script>
<script type="text/javascript">
    isOpen = false;
    var newConn = function(){
        var sockjs_url = 'https://xxx.com/push';
        var sockjs = new SockJS(sockjs_url);
        sockjs.onopen = function(){
            sockjs.send(JSON.stringify({"userName": "kpatel@acme.com", "sessionId": "e1930358-87d2-4d2b-86b6-2b2373acfaf1"}));
            isOpen = true;           
            console.log('Connected.');
        };
        sockjs.onmessage = function(e){
            console.log("Got message:" + e.data);
            if(e.data == 'fileSystemEvent'){
                si.call = si.fn();    // app-specific refresh hook, defined elsewhere on the page
            }
        };
        sockjs.onclose = function(){
            isOpen = false;    // reset, so the retry loop below knows when a reconnect has succeeded
            var recInterval = setInterval(function(){
                newConn();
                if(isOpen) clearInterval(recInterval);
            }, 15000);
            console.log('Closed.');
            console.log('Connecting...');
        };
    };
    newConn();
</script>

2) On the server, the first thing you need is a C10K-capable server like nginx or haproxy in front.
I had to install nginx 1.4.3 because the default version that apt-get installs did not have websocket support.
I used this to install nginx 1.4.3:
wget http://nginx.org/download/nginx-1.4.3.tar.gz
tar xvzf nginx-1.4.3.tar.gz
cd nginx-1.4.3

./configure \
--user=nginx                          \
--group=nginx                         \
--prefix=/etc/nginx                   \
--sbin-path=/usr/sbin/nginx           \
--conf-path=/etc/nginx/nginx.conf     \
--pid-path=/var/run/nginx.pid         \
--lock-path=/var/run/nginx.lock       \
--error-log-path=/var/log/nginx/error.log \
--http-log-path=/var/log/nginx/access.log \
--with-http_gzip_static_module        \
--with-http_stub_status_module        \
--with-http_ssl_module                \
--with-pcre                           \
--with-file-aio                       \
--with-http_realip_module             \
--without-http_scgi_module            \
--without-http_uwsgi_module           \
--without-http_fastcgi_module        
make
sudo make install
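Once the install finishes, nginx -V prints the version and the configure flags, so you can confirm you are running the build you just compiled:

/usr/sbin/nginx -V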

3) Then I had to add the upgrade headers below to make nginx upgrade the http connection to a websocket:

    location /push {
        proxy_pass              http://localhost:8090;
        proxy_set_header        X-Real-IP $remote_addr;
        proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header        X-Forwarded-Proto https;
        proxy_set_header        Host $http_host;
        # WebSocket support (nginx 1.4)
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
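After reloading nginx, a quick way to check that the proxying works is to curl the SockJS info endpoint through it (xxx.com being the same placeholder as in the client snippet); it should come back with a small JSON blob that includes "websocket":true:

curl -k https://xxx.com/push/info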

4) I started the nodejs server on port 8090; tomcat was on 8080. Bare-bones nodejs will not give you everything you need, so you need several packages. I installed all of them using:

sudo add-apt-repository ppa:chris-lea/node.js
sudo apt-get update
sudo apt-get install nodejs
sudo npm install -g redis sockjs socket.io winston



a) winston for logging, filling the role log4j does in java
b) redis, to talk to redis
c) sockjs, for the sockjs connection handling

I just installed the modules globally since this is a scratch app.

5) Inside the nodejs app, when a connection comes in, the client passes a username and sessionid. nodejs needs to validate that sessionid with tomcat and then add the connection to a local map. When it later receives a redis event, it just writes the event to the socket associated with that username, or drops it if the user has no open connection. (The helpers left blank in this listing are sketched right after it.)

var sockjs  = require('sockjs');
var http    = require('http');
var redis   = require('redis');

//    Store all connected sessions
var sessionMap = {};

//    Create redis client
var redisClient = redis.createClient(6379, '127.0.0.1');

//    Create sockjs server
var sockjsServer = sockjs.createServer();

// Sockjs server

sockjsServer.on('connection', function(conn) {
    conn.on('data', function(message){
        var data = JSON.parse(message || "{}");
        //    check with tomcat whether this user's session is valid before trusting the connection
        validateSession(data.userName, data.sessionId, function(response){
            logger.info("Got response" + response);
            if(response.success === true){
                addConnectionToSessionMap(conn,data.sessionId,data.userName);
                conn.write(JSON.stringify({success:true, statusCode:response.statusCode}));
            } else {
                logger.info("Got error: " + response.message);
                conn.write(JSON.stringify({success:false, statusCode:response.statusCode}));
                conn.end();
            }
        });
    });
    conn.on('close', function(){
        //    remove the connection from the maps when it closes
        //    (connMap and removeConnection are more of the bookkeeping left as blanks)
        if(connMap[conn.id]){
            removeConnection(conn, '');
            conn.end();
        }
        logger.info("Closing...");
    });
});

//    subscribe to the push channel (config is the app's settings object, another blank to fill in)
redisClient.subscribe(config.redis_push_channel);

//    Push incoming message to all connected users
redisClient.on("message", function(channel, message){
    var data = JSON.parse(message || "{}");
    //figure out the connections to write message based on users and write message to them
    //conn.write(data)
});


//    Create http server
var server = http.createServer();
//    Hook sockjs in to http server
sockjsServer.installHandlers(server, {prefix:'/push'});   
server.listen(8090);
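For completeness, here is a rough sketch of the two helpers the listing leaves blank. It reuses the http module required at the top, but the tomcat endpoint URL and the response handling are assumptions you would adapt to your own app:

//    Hypothetical session check against tomcat -- endpoint URL and response shape are assumptions
function validateSession(userName, sessionId, callback){
    var path = '/app/api/session/validate?userName=' + encodeURIComponent(userName)
             + '&sessionId=' + encodeURIComponent(sessionId);
    http.get({host: 'localhost', port: 8080, path: path}, function(res){
        //    treat HTTP 200 from tomcat as a valid session
        callback({success: res.statusCode === 200, statusCode: res.statusCode});
    }).on('error', function(e){
        callback({success: false, statusCode: 500, message: e.message});
    });
}

//    Remember which sockets belong to which user (one user may have several tabs open)
function addConnectionToSessionMap(conn, sessionId, userName){
    if(!sessionMap[userName]) sessionMap[userName] = [];
    sessionMap[userName].push(conn);
}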

6) But coming from the java world, nodejs seemed fragile, as it would just kill itself on every uncaught exception. That is why, when I added logging with winston, I also set "exitOnError: false":

var winston = require('winston');

var logger = new (winston.Logger)({
  transports: [
    new winston.transports.File({ filename: __dirname + '/logs/push.log', json: false })
  ],
  exceptionHandlers: [
    new winston.transports.File({ filename: __dirname + '/logs/push.log', json: false })
  ],
  exitOnError: false
});

7) Also, nodejs does not come with the usual startup/stop scripts that tomcat has, so you need to cook up your own, something like:

export NODE_PATH=/usr/lib/node_modules
nohup node push_server.js 1>>"nohup.out" 2>&1 &
echo $! > push.pid"
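and the matching stop piece is just the reverse, using the pid file written above:

kill $(cat push.pid) && rm -f push.pid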

In short, I am still excited about nodejs, because if your startup has JS skills then your JS developers can quickly start pitching in on the nodejs code and writing server-side code.

Also, being event-based, it scales pretty well, similar to the way C10K servers like nginx and haproxy scale. For me the bigger learning was event-based programming, where you just write what happens on each event and let the framework take care of the rest. The good part is that the browser has a live connection: when a file is added I can publish a message to redis, and in the browser console logs of all connected users I can see the event instantly :).