Fix the Shellshock bug on Ubuntu

Fix the Shellshock bug on Ubuntu:

Test whether your system is vulnerable:

$ env X="() { :;}; echo shellshock" `which bash` -c "echo completed"
shellshock
completed

If "shellshock" appears in the output, your system is vulnerable.

Bash supports functions, though in a somewhat limited implementation, and these functions can be placed into environment variables. The flaw is triggered when extra code is appended to the end of a function definition inside such an environment variable.
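
For illustration, a minimal sketch of the mechanism (the variable name X and the message are arbitrary): because the value starts with "() {", a vulnerable bash imports it into the child shell as a function named X, so calling X prints the message. Current, fully patched bash versions no longer import functions from plain environment variables like this.

$ env X='() { echo "imported function"; }' bash -c 'X'
imported function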

Download the patched bash package for your Ubuntu version from the fix link.
For Ubuntu 12.04 LTS (64-bit):

#wget https://launchpad.net/~ubuntu-security/+archive/ubuntu/ppa/+build/6400987/+files/bash_4.2-2ubuntu2.2_amd64.deb
#dpkg -i /home/hemant/Downloads/bash_4.2-2ubuntu2.2_amd64.deb

The patch for this flaw ensures that no code is allowed after the end of a Bash function definition.
So if you run the above example with the patched version of Bash, you should get output similar to:

$ env X="() { :;}; echo shellshock" `which bash` -c "echo completed"
/bin/bash: warning: X: ignoring function definition attempt
/bin/bash: error importing function definition for `X'
completed

Now you are safe from the Shellshock bug.

More details can be found at:

Redhat

NVD

Thank you.


PHP multiprocessing using fork

PHP multiprocessing using fork:

Process Control support in PHP implements the Unix style of process creation, program execution, signal handling and process termination.

Find more details at PHP fork

The pcntl_fork() function creates a child process that differs from the parent process only in its PID and PPID.
On success, the PID of the child process is returned in the parent’s thread of execution, and a 0 is returned in the child’s thread of execution.
On failure, a -1 will be returned in the parent’s context, no child process will be created, and a PHP error is raised.
The pcntl_wait($status) call waits for a child process to exit, which protects against zombie children.

Forking in PHP is useful when we want to process multiple tasks in parallel.

Let's assume we have the following tasks to perform:
1) updateCache : Time 3s.
2) sentEmail : Time 2s.
3) logEntry : Time 1s.
4) addDBEntry : Time 5s.

If we perform these tasks serially, it will take 11 seconds (5+3+2+1).
But if we process them in parallel, the total drops to about 5 seconds (the longest single task), using pcntl_wait() so the parent does not return until its children have finished and no zombies are left behind.

Alternatively, the parent can return immediately and leave the child processes running in the background; if nobody waits for them, finished children remain as zombie processes until they are reaped.

$time_start = microtime(true);

function executeTask($types){
    foreach ($types as $type) {
        switch ($type) {
            case 'updateCache':
                sleep(3);
                echo "updateCache \n";
                break;
            case 'sentEmail':
                sleep(2);
                echo "sentEmail \n";
                break;
            case 'logEntry':
                sleep(1);
                echo "logEntry \n";
                break;
            case 'addDBEntry':
                sleep(5);
                echo "addDBEntry \n";
                break;
            default:
                break;
        }
    }
}

$useFork = true;

if ($useFork) {
    $pid1 = pcntl_fork();
    if ($pid1 == -1) {
        // Fork failed: fall back to running everything serially.
        executeTask(array("updateCache", "sentEmail", "logEntry", "addDBEntry"));
    } else if ($pid1) {
        // Parent: fork a second child and split the remaining work.
        $pid2 = pcntl_fork();
        if ($pid2 == -1) {
            executeTask(array("updateCache", "sentEmail", "logEntry"));
        } else if ($pid2) {
            // Parent runs updateCache (3s) itself.
            executeTask(array("updateCache"));
        } else {
            // Second child runs sentEmail + logEntry (2s + 1s), then exits.
            executeTask(array("sentEmail", "logEntry"));
            exit(0);
        }
        // Reap the second child.
        pcntl_wait($status);
    } else {
        // First child runs addDBEntry (5s), then exits.
        executeTask(array("addDBEntry"));
        exit(0);
    }

    // Reap the first child; only the parent reaches this point.
    pcntl_wait($status);
    $time_end = microtime(true);
    $time = $time_end - $time_start;
    echo "Total time: $time seconds\n";
} else {
    executeTask(array("updateCache", "sentEmail", "logEntry", "addDBEntry"));
    $time_end = microtime(true);
    $time = $time_end - $time_start;
    echo "Total time: $time seconds\n";
}

Thank you.


Git branching model

Git branching model :

The following model can be used for features and fixes.

Considerations:

Servers:
- Local Server
- Development Server
- Production Server

Branches:
- Release branch : “release”
- Development branch : “master”
- Feature branch : “feature”

Local Server:

1) Check current status:

- a) List local branches.

$git branch

- b)List local and remote-tracking branches.

$git branch -a

- c) List all branches on the remote.

$git ls-remote --heads

- d) Check a summary of the remote repository.

$git remote show origin

2) Pull the latest updates.

$git pull origin master;

3) Create branch: create the branch and make changes.


$git branch feature
$git checkout feature
or
$git checkout -b feature

Make changes.

4) Add and commit changes:


$git add file_name
$git commit -m "message"

5) Add new remote branch: push the branch to the origin server.

$git push -u origin feature

Development Server:

1) Fetch new branches: fetch all remote branches.

$git fetch --all

2) Checkout new remote branch: check out and track the new remote branch (branching off origin/feature sets up tracking automatically).

$git checkout -b feature origin/feature

Test code.

Local Server:

1) Merge branch: merge the 'feature' branch into 'master'.


$git checkout master;
$git merge feature;

2)Make a feature release.


$git checkout release;
$git pull origin release;
$git merge master;
$git fetch --tags;
$git describe;
$git tag -a <tag-name> -m 'log message'
$git push origin release --tags
$git checkout master;

3) Delete branch: delete the branch locally and remotely.


$git branch -D feature

$git push origin --delete feature
or
$git push origin :feature

4) Prune stale branches: deletes all stale remote-tracking branches under refs/remotes/origin.
These stale branches have already been removed from the remote repository
referenced by origin, but are still locally available in "remotes/origin".


$git remote prune origin

When you use git push origin :feature, your own origin/feature tracking branch is removed automatically, so running git remote prune origin only prunes branches that were removed by someone else.
It is more likely that your co-workers will need to run git remote prune origin to get rid of the branches you have removed.
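
To preview what would be pruned before actually removing anything, a dry run can be used:

$git remote prune --dry-run origin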

Production Server:

1) Fetch and check out the new tag.

$git fetch --tags
$git checkout <tag-name>

Note: use the "stash" commands to set aside uncommitted changes and avoid conflicts.


$git stash
$git stash pop
$git stash apply

Thank you.


Reset MySQL root password

Reset MySQL root password:

If you set a root password previously but have forgotten it, you can set a new one. The following steps apply to Unix-like systems.

1) Stop the MySQL server.

#sudo /etc/init.d/mysql stop

OR

#sudo kill `sudo cat /var/run/mysqld/mysqld.pid`

2) Start the MySQL daemon with the grant tables (which store the passwords) skipped.

#sudo mysqld_safe --skip-grant-tables &

3) Now you should be able to connect to MySQL without a password.

#mysql --user=root mysql

4) Update the root password.


mysql>update user set Password=PASSWORD('rootnewpassword') where User='root';

mysql>flush privileges;

mysql>exit;

5) Stop MySQL and start it again.


#sudo kill `sudo cat /var/run/mysqld/mysqld.pid`

#sudo /etc/init.d/mysql start
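
To verify that the reset worked, a quick login check (a sketch; substitute the password you actually set):

#mysql --user=root --password='rootnewpassword' -e "SELECT 1;"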

Thank you.


Install HP printer driver on Ubuntu

Install HP printer driver on Ubuntu:

Install using repository driver:

# sudo aptitude install hplip

Install using the latest driver from HP:

Follow the steps at the link given below,

HP driver Installation Wizard

If you get the following error,


error: A required dependency 'cups (CUPS - Common Unix Printing System)' is still missing.
error: Installation cannot continue without this dependency.
error: Please manually install this dependency and re-run this installer.

then install,

# sudo aptitude install hplip

The above command will install the dependency from the repository.

Then start the installation again using the "HP driver Installation Wizard".

If you get a message like "hplip already exists [remove and install | overwrite | quit]", choose "remove and install".

This removes the old version installed from the repository and installs your newly downloaded version.

Start the setup in GUI mode when asked.

Install the HP plugin from the HP site, if asked.

After completing the setup, print a test page and you are done.

Happy Printing !!!.


Shell script multi-processing

Shell script multi-processing:

Multiprocessing in a shell script can be achieved by spawning multiple child processes.
There are three methods:

1) Using commands as a string:


#!/bin/bash

declare -a NOS=(1 2 3 5 6 7 8)

# Command string; xargs substitutes each input value for {}.
STR='{ echo "STR.{}"; }'

for NO in "${NOS[@]}"; do echo "$NO"; done | xargs -P8 -n1 -I{} bash -c "$STR"

2) Exporting commands as a function (preferred way):


#!/bin/bash

declare -a NOS=(1 2 3 5 6 7 8)

fName() { echo "fName.$1"; }

export -f fName   # Make fName visible to the child bash started by xargs.

for NO in "${NOS[@]}"; do echo "$NO"; done | xargs -P8 -n1 -I{} bash -c "fName '{}'"

Things to keep in mind (see the sketch below):
- You cannot use the parent script's variables in the child shell unless you pass them as parameters via xargs or export them.
- You cannot use a function from the parent script or the current shell unless you export it with export -f.
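
A minimal sketch illustrating both points (the names PREFIX and worker are just for illustration): the variable must be exported (or passed as an argument) and the function must be exported with export -f, otherwise the child bash started by xargs cannot see them.

#!/bin/bash

PREFIX="job"                       # parent variable
worker() { echo "${PREFIX}-$1"; }  # parent function

export PREFIX                      # without this, PREFIX is empty in the child bash
export -f worker                   # without this, the child bash cannot find 'worker'

printf '%s\n' 1 2 3 | xargs -P3 -n1 -I{} bash -c 'worker "{}"'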

3) Using the GNU parallel tool:
GNU parallel is a shell tool for executing jobs in parallel using one or more computers.
More details can be found here: Parallel Shell Tool

parallel --version

#!/bin/bash
FORK_PROC=$1   # Number of parallel processes, passed as the first argument.
l='log.out'

# Build a list of commands to run.
for i in {1..100}; do
    CMD="echo $i > $l"
    CMD_COMPUTE_REPORT+=("$CMD")
done

processTask() { echo "IN => $1"; sleep 2; eval "$1" || echo "Error:$LINENO"; }
export -f processTask   # Export 'processTask' so the child shells can see it.

# Process tasks in parallel using xargs.
time for CMD in "${CMD_COMPUTE_REPORT[@]}"; do echo "$CMD"; done | xargs -P"$FORK_PROC" -n1 -I{} bash -c "processTask '{}'"

# Process tasks in parallel using GNU parallel.
time parallel "processTask {}" ::: "${CMD_COMPUTE_REPORT[@]}"

exit 0

Thank you.


Tech Giants Acquisition Strategies

Tech Giants Acquisition Strategies

A comparison of the acquisition strategies of five tech giants over the last 15 years.

Original content produced by simplybusiness.


Composer commands

Composer :

Composer is a tool for dependency management in PHP. It allows you to declare the libraries your project depends on, and it will install them in your project for you.
For more information, check the official site: getcomposer.

Packagist is the main Composer repository. It aggregates all sorts of PHP packages that are installable with Composer.
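
For example, a new dependency from Packagist can be declared and installed in one step with the require command (monolog/monolog is just an illustrative package name):

$php composer.phar require monolog/monolog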

Composer Install :

- The install command checks whether a lock file is present; if it is, it downloads the versions specified there (regardless of what composer.json says). If no composer.lock file exists, Composer reads the dependencies and versions from composer.json and creates the lock file.

$php composer.phar install

Composer Update :

- This fetches the latest matching versions (according to your composer.json file) and updates the lock file with the new versions.

$php composer.phar update

Composer Update single vendor/package :

- If you only want to install or update one dependency.

$php composer.phar update "vendor-name/package-name"

1) Composer steps for a new checkout:

- Check/Create composer.json
- Run,
$php composer.phar install

2) Composer steps for adding a new bundle:

- Add new bundle entry to composer.json
- Run,
$php composer.phar update "vendor-name/package-name"
- Commit the updated composer.json and composer.lock files to the master repository.

3) Composer steps for deleting an existing bundle:

- Delete the existing bundle entry from composer.json, app/AppKernel.php and app/config/config.yml (Symfony framework).
- Run,
$php composer.phar update "vendor-name/package-name"
- Commit the updated composer.json, composer.lock, app/AppKernel.php and app/config/config.yml files to the master repository.

4) Warning:
If you come across the following message,
"Warning: The lock file is not up to date with the latest changes in composer.json, you may be getting outdated dependencies, run update to update them."
don't panic! This is what you should expect if you have just edited the composer.json file. For instance, if you add or update a detail such as the library description, authors, extra parameters, or even a trailing whitespace, this changes the md5sum of the file. Composer then warns you when this hash differs from the one stored in composer.lock.
Run,

$php composer.phar update --lock

5) Deploying composer changes:

What you really need to do is deploy your updated composer.lock and then re-run composer install. You should never run composer update in production. If, however, you deploy a new composer.lock with new dependencies and/or versions and then run composer install, Composer will install the new dependencies.
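
A minimal deployment sketch under these assumptions: the project path and branch name are placeholders, and --no-dev and --optimize-autoloader are optional Composer flags commonly used for production installs.

$cd /var/www/myproject
$git pull origin release
$php composer.phar install --no-dev --optimize-autoloader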

Thank you.


Set up RabbitMQ clusters on Ubuntu

Set up RabbitMQ clusters on Ubuntu:

A RabbitMQ broker is a logical grouping of one or several Erlang nodes, each running the RabbitMQ application and sharing users, virtual hosts, queues, exchanges, etc. Sometimes we refer to the collection of nodes as a cluster.

Let's set up and manage a RabbitMQ cluster across three machines: rabbit1, rabbit2 and rabbit3.

Erlang nodes use a cookie to determine whether they are allowed to communicate with each other – for two nodes to be able to communicate they must have the same cookie.

The cookie is just a string of alphanumeric characters. It can be as long or short as you like.

Erlang will automatically create a random cookie file when the RabbitMQ server starts up. This will be typically located in /var/lib/rabbitmq/.erlang.cookie on Unix systems and C:\Users\Current User\.erlang.cookie or C:\Documents and Settings\Current User\.erlang.cookie on Windows systems. The easiest way to proceed is to allow one node to create the file, and then copy it to all the other nodes in the cluster.

As an alternative, you can insert the option “-setcookie cookie” in the erl call in the rabbitmq-server and rabbitmqctl scripts.
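
A minimal sketch of syncing the cookie from rabbit1 to the other nodes (the hostnames, the root user and the paths are assumptions; adjust them to your setup, and make sure only the rabbitmq user can read the file):

#rabbit1$ scp /var/lib/rabbitmq/.erlang.cookie root@rabbit2:/var/lib/rabbitmq/.erlang.cookie
#rabbit1$ scp /var/lib/rabbitmq/.erlang.cookie root@rabbit3:/var/lib/rabbitmq/.erlang.cookie
#rabbit2$ chown rabbitmq:rabbitmq /var/lib/rabbitmq/.erlang.cookie && chmod 400 /var/lib/rabbitmq/.erlang.cookie
#rabbit3$ chown rabbitmq:rabbitmq /var/lib/rabbitmq/.erlang.cookie && chmod 400 /var/lib/rabbitmq/.erlang.cookie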

The first step is to start RabbitMQ on all nodes in the normal way:

#rabbit1$ rabbitmq-server -detached
#rabbit2$ rabbitmq-server -detached
#rabbit3$ rabbitmq-server -detached

Now confirm that each node is running with the cluster_status command:

#rabbit1$ rabbitmqctl cluster_status
Cluster status of node rabbit@rabbit1 ...
[{nodes,[{disc,[rabbit@rabbit1]}]},{running_nodes,[rabbit@rabbit1]}]
...done.
#rabbit2$ rabbitmqctl cluster_status
Cluster status of node rabbit@rabbit2 ...
[{nodes,[{disc,[rabbit@rabbit2]}]},{running_nodes,[rabbit@rabbit2]}]
...done.
#rabbit3$ rabbitmqctl cluster_status
Cluster status of node rabbit@rabbit3 ...
[{nodes,[{disc,[rabbit@rabbit3]}]},{running_nodes,[rabbit@rabbit3]}]
...done.

Creating the cluster:

In order to link up our three nodes in a cluster, we tell two of the nodes, say rabbit@rabbit2 and rabbit@rabbit3, to join the cluster of the third, say rabbit@rabbit1.

#rabbit2$ rabbitmqctl stop_app
Stopping node rabbit@rabbit2 ...done.
#rabbit2$ rabbitmqctl join_cluster --ram rabbit@rabbit1
Clustering node rabbit@rabbit2 with [rabbit@rabbit1] ...done.
#rabbit2$ rabbitmqctl start_app
Starting node rabbit@rabbit2 ...done.

Now check the cluster status again:

#rabbit1$ rabbitmqctl cluster_status
Cluster status of node rabbit@rabbit1 ...
[{nodes,[{disc,[rabbit@rabbit1]},{ram,[rabbit@rabbit2]}]},
{running_nodes,[rabbit@rabbit2,rabbit@rabbit1]}]
...done.
#rabbit2$ rabbitmqctl cluster_status
Cluster status of node rabbit@rabbit2 ...
[{nodes,[{disc,[rabbit@rabbit1]},{ram,[rabbit@rabbit2]}]},
{running_nodes,[rabbit@rabbit1,rabbit@rabbit2]}]
...done.

Now we join rabbit@rabbit3 as a disc node to the same cluster.

#rabbit3$ rabbitmqctl stop_app
Stopping node rabbit@rabbit3 ...done.
#rabbit3$ rabbitmqctl join_cluster rabbit@rabbit2
Clustering node rabbit@rabbit3 with rabbit@rabbit2 ...done.
#rabbit3$ rabbitmqctl start_app
Starting node rabbit@rabbit3 ...done.

We can see that the three nodes are joined in a cluster,

#rabbit1$ rabbitmqctl cluster_status
Cluster status of node rabbit@rabbit1 ...
[{nodes,[{disc,[rabbit@rabbit1,rabbit@rabbit3]},{ram,[rabbit@rabbit2]}]},
{running_nodes,[rabbit@rabbit3,rabbit@rabbit2,rabbit@rabbit1]}]
...done.
#rabbit2$ rabbitmqctl cluster_status
Cluster status of node rabbit@rabbit2 ...
[{nodes,[{disc,[rabbit@rabbit1,rabbit@rabbit3]},{ram,[rabbit@rabbit2]}]},
{running_nodes,[rabbit@rabbit3,rabbit@rabbit1,rabbit@rabbit2]}]
...done.
#rabbit3$ rabbitmqctl cluster_status
Cluster status of node rabbit@rabbit3 ...
[{nodes,[{disc,[rabbit@rabbit3,rabbit@rabbit1]},{ram,[rabbit@rabbit2]}]},
{running_nodes,[rabbit@rabbit2,rabbit@rabbit1,rabbit@rabbit3]}]
...done.

Changing node types:

We can change the type of a node from ram to disk and vice versa.

#rabbit2$ rabbitmqctl stop_app
Stopping node rabbit@rabbit2 ...done.
#rabbit2$ rabbitmqctl change_cluster_node_type disc
Turning rabbit@rabbit2 into a disc node ...
...done.
#rabbit2$ rabbitmqctl start_app
Starting node rabbit@rabbit2 ...done.
#rabbit3$ rabbitmqctl stop_app
Stopping node rabbit@rabbit3 ...done.
#rabbit3$ rabbitmqctl change_cluster_node_type ram
Turning rabbit@rabbit3 into a ram node ...
...done.
#rabbit3$ rabbitmqctl start_app
Starting node rabbit@rabbit3 ...done.

Restarting cluster nodes:

Nodes that have been joined to a cluster can be stopped at any time. The nodes automatically "catch up" with the other cluster nodes when they start up again.

#rabbit1$ rabbitmqctl stop
Stopping and halting node rabbit@rabbit1 ...done.
#rabbit2$ rabbitmqctl cluster_status
Cluster status of node rabbit@rabbit2 ...
[{nodes,[{disc,[rabbit@rabbit1,rabbit@rabbit2]},{ram,[rabbit@rabbit3]}]},
{running_nodes,[rabbit@rabbit3,rabbit@rabbit2]}]
...done.

#rabbit1$ rabbitmq-server -detached
#rabbit1$ rabbitmqctl cluster_status
Cluster status of node rabbit@rabbit1 ...
[{nodes,[{disc,[rabbit@rabbit1,rabbit@rabbit2]},{ram,[rabbit@rabbit3]}]},
{running_nodes,[rabbit@rabbit2,rabbit@rabbit1,rabbit@rabbit3]}]
...done.

Breaking up a cluster :

Nodes need to be removed explicitly from a cluster when they are no longer meant to be part of it.

#rabbit3$ rabbitmqctl stop_app
Stopping node rabbit@rabbit3 ...done.
#rabbit3$ rabbitmqctl reset
Resetting node rabbit@rabbit3 ...done.
#rabbit3$ rabbitmqctl start_app
Starting node rabbit@rabbit3 ...done.

#rabbit3$ rabbitmqctl cluster_status
Cluster status of node rabbit@rabbit3 ...
[{nodes,[{disc,[rabbit@rabbit3]}]},{running_nodes,[rabbit@rabbit3]}]
...done.

We can also remove nodes remotely.

#rabbit1$ rabbitmqctl stop_app
Stopping node rabbit@rabbit1 ...done.
#rabbit2$ rabbitmqctl forget_cluster_node rabbit@rabbit1
Removing node rabbit@rabbit1 from cluster ...
...done.

Note that rabbit1 still thinks it is clustered with rabbit2, and trying to start it will result in an error.

#rabbit1$ rabbitmqctl start_app
Starting node rabbit@rabbit1 ...
Error: inconsistent_cluster: Node rabbit@rabbit1 thinks it's clustered with node rabbit@rabbit2, but rabbit@rabbit2 disagrees
#rabbit1$ rabbitmqctl reset
Resetting node rabbit@rabbit1 ...done.
#rabbit1$ rabbitmqctl start_app
Starting node rabbit@rabbit1 ...
...done.

Auto-configuration of a cluster:

Instead of configuring clusters "on the fly" using the cluster commands, clusters can also be set up via the RabbitMQ configuration file. The file should set the cluster_nodes field in the rabbit application to a tuple containing a list of rabbit nodes and an atom, either disc or ram, indicating whether the node should join them as a disc node or not.

If cluster_nodes is specified, RabbitMQ will try to cluster to each node provided, and stop after it can cluster with one of them.
RabbitMQ will only try to cluster to nodes that are online and have the same versions of Erlang and RabbitMQ. If no suitable nodes are found, the node is left unclustered. Note that the cluster configuration is applied only to fresh nodes. A fresh node is a node that has just been reset or is being started for the first time.

For instance, create the RabbitMQ config file on rabbit1 and rabbit2 with the contents:

[{rabbit,
[{cluster_nodes, {['rabbit@rabbit1', 'rabbit@rabbit2', 'rabbit@rabbit3'], disc}}]}].

Since we want rabbit@rabbit3 to be a ram node, we need to specify that in its configuration file:

[{rabbit,
[{cluster_nodes, {['rabbit@rabbit1', 'rabbit@rabbit2', 'rabbit@rabbit3'], ram}}]}].

Once we have the configuration files in place, we simply start the nodes:

#rabbit1$ rabbitmq-server -detached
#rabbit2$ rabbitmq-server -detached
#rabbit3$ rabbitmq-server -detached

We can see that the three nodes are joined in a cluster,

#rabbit1$ rabbitmqctl cluster_status
Cluster status of node rabbit@rabbit1 ...
[{nodes,[{disc,[rabbit@rabbit1,rabbit@rabbit2]},{ram,[rabbit@rabbit3]}]},
{running_nodes,[rabbit@rabbit1,rabbit@rabbit2,rabbit@rabbit3]}]
...done.
#rabbit2$ rabbitmqctl cluster_status
Cluster status of node rabbit@rabbit2 ...
[{nodes,[{disc,[rabbit@rabbit1,rabbit@rabbit2]},{ram,[rabbit@rabbit3]}]},
{running_nodes,[rabbit@rabbit1,rabbit@rabbit2,rabbit@rabbit3]}]
...done.
#rabbit3$ rabbitmqctl cluster_status
Cluster status of node rabbit@rabbit3 ...
[{nodes,[{disc,[rabbit@rabbit1,rabbit@rabbit2]},{ram,[rabbit@rabbit3]}]},
{running_nodes,[rabbit@rabbit1,rabbit@rabbit2,rabbit@rabbit3]}]
...done.

To set up the admin panel, check how to enable the Web Interface.
Thank you.


Install Erlang on Ubuntu

Install Erlang on Ubuntu:

Remove any older Erlang installation:

#sudo apt-get remove erlang
#sudo apt-get autoremove erlang
#sudo apt-get purge erlang

Installation using the repository:

#sudo apt-get install erlang erlang-doc

Manual installation using the Erlang Solutions repository:

Download Erlang

1. Adding repository entry

To add the Erlang Solutions repository (including its public key for apt-secure) to your system, run the following commands:


#wget http://packages.erlang-solutions.com/erlang-solutions_1.0_all.deb
#sudo dpkg -i erlang-solutions_1.0_all.deb

Alternatively: adding the repository entry manually

Add one of the following lines to your /etc/apt/sources.list (according to your distribution):

deb http://packages.erlang-solutions.com/debian raring contrib
deb http://packages.erlang-solutions.com/debian quantal contrib
deb http://packages.erlang-solutions.com/debian precise contrib
deb http://packages.erlang-solutions.com/debian oneiric contrib
deb http://packages.erlang-solutions.com/debian lucid contrib

To verify which distribution you are running, run lsb_release -c in a console.

Next, add the Erlang Solutions public key for apt-secure using the following commands:

#wget http://packages.erlang-solutions.com/debian/erlang_solutions.asc
#sudo apt-key add erlang_solutions.asc

2. Installing Erlang

Refresh the repository cache and install the erlang package.

#sudo apt-get update
#sudo apt-get install erlang

Manual installation by compiling from source (replace the package version as needed):

#sudo apt-get -y install build-essential m4 libncurses5-dev libssh-dev unixodbc-dev libgmp3-dev libwxgtk2.8-dev libglu1-mesa-dev fop xsltproc default-jdk
#wget http://www.erlang.org/download/otp_src_R16B01.tar.gz
#tar -xvzf otp_src_R16B01.tar.gz
#chmod -R 777 otp_src_R16B01
#cd otp_src_R16B01
#./configure
#make
#sudo make install
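
To verify the installation, a quick check (the exact release string depends on the version you installed):

#erl -noshell -eval 'io:format("OTP release: ~s~n", [erlang:system_info(otp_release)]), halt().'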

Thank you.
