Killing child process in shell script

Many times we need to kill a child process that has hung or blocked for some reason, e.g. an FTP connection issue.

There are two approaches,

1) Create a separate new parent for each child, which monitors the child process and kills it once the timeout is reached.

Create test.sh as follows,

#!/bin/bash
declare -a CMDs=("AAA" "BBB" "CCC" "DDD")
for CMD in "${CMDs[@]}"; do
  (
    sleep 10 &
    PID=$!
    echo "Started $CMD => $PID"
    sleep 5
    echo "Killing $CMD => $PID"
    kill $PID
    echo "$CMD Completed."
  ) &
done
exit;

and watch the processes named 'test' in another terminal using the following command.

watch -n1 'ps x -o "%p %r %c" | grep "test" '

The above script creates 4 new child processes and their parents. Each child runs for 10 seconds, but once the 5-second timeout is reached, its respective parent kills it,
so no child is able to complete its full 10-second execution.
Play around with the timings (switch 10 and 5) to see the other behaviour: in that case the child finishes its 5-second execution before the 10-second timeout is reached.
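The watchdog idea behind approach 1 can be reduced to a minimal, self-contained sketch. This is not the script above but a compressed illustration of the same pattern, with timings shortened to a 2-second job and a 1-second timeout; long_job is a hypothetical stand-in for the real (possibly hanging) command:

```shell
#!/bin/bash
# Watchdog pattern: a monitor process kills the job if it outlives the timeout.
long_job() { sleep 2; }                    # stand-in for a hanging command

long_job & PID=$!
( sleep 1; kill "$PID" 2>/dev/null ) &     # monitor: enforce a 1s timeout
WATCHDOG=$!
if wait "$PID" 2>/dev/null; then
    STATUS="completed"
else
    STATUS="killed"                        # wait returned non-zero: job was killed
fi
kill "$WATCHDOG" 2>/dev/null
echo "job $STATUS"
```

Since the job needs 2 seconds but the watchdog fires after 1, this prints "job killed"; swap the two sleep durations and it prints "job completed" instead.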

2) Let the current parent monitor and kill the child processes once the timeout is reached. This won't create a separate parent per child, and you can manage all child processes properly within the same parent.

Create test.sh as follows,

#!/bin/bash
declare -A CPIDs
declare -a CMDs=("AAA" "BBB" "CCC" "DDD")
CMD_TIME=15
for CMD in "${CMDs[@]}"; do
  (echo "Started..$CMD"; sleep $CMD_TIME; echo "$CMD Done";) &
  CPIDs[$!]="$CMD"
  sleep 1
done
GPID=$(ps -o pgid= $$)
CNT_TIME_OUT=10
CNT=0
while true; do
  declare -A TMP_CPIDs
  for PID in "${!CPIDs[@]}"; do
    echo "Checking ${CPIDs[$PID]}=>$PID"
    if ps -p $PID > /dev/null; then
      echo "-->${CPIDs[$PID]}=>$PID is running.."
      TMP_CPIDs[$PID]=${CPIDs[$PID]}
    else
      echo "-->${CPIDs[$PID]}=>$PID is completed."
    fi
  done
  if [ ${#TMP_CPIDs[@]} -eq 0 ]; then
    echo "All commands completed."
    break
  else
    unset CPIDs
    declare -A CPIDs
    for PID in "${!TMP_CPIDs[@]}"; do
      CPIDs[$PID]=${TMP_CPIDs[$PID]}
    done
    unset TMP_CPIDs
    if [ $CNT -gt $CNT_TIME_OUT ]; then
      echo "${CPIDs[@]} PIDs not responding. Timeout reached ($CNT sec). Killing all children with GPID $GPID.."
      kill -- -$GPID
    fi
  fi
  CNT=$((CNT+1))
  echo "waiting since $CNT secs.."
  sleep 1
done
exit;

and watch the processes named 'test' in another terminal using the following command.

watch -n1 'ps x -o "%p %r %c" | grep "test" '

The above script creates 4 new child processes. We store the PIDs of all children and loop over them to check whether they have finished execution or are still running.
Each child runs for CMD_TIME seconds, but if the CNT_TIME_OUT timeout is reached, all children are killed by the parent process.
You can switch the timings and play around with the script to observe the behaviour.
One drawback of this approach is that it uses the process group ID to kill the whole child tree. Since the parent process itself belongs to the same group, it gets killed as well.

You may need to put the children in a different process group if you don't want the parent to be killed.
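One way around that drawback (a sketch, assuming the setsid utility from util-linux is available) is to start each child in its own session, which gives it a new process group; killing that group then leaves the parent untouched:

```shell
#!/bin/bash
# Start the child in a new session so it gets its own process group.
setsid sleep 30 &
CPID=$!
sleep 1                                   # give setsid a moment to take effect
CPGID=$(ps -o pgid= "$CPID" | tr -d ' ')
MYPGID=$(ps -o pgid= $$ | tr -d ' ')
echo "child PGID=$CPGID, parent PGID=$MYPGID"

kill -- "-$CPGID"                         # kill the child's whole group
wait "$CPID" 2>/dev/null
echo "parent still alive"
```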

Following is one more example, which monitors a PHP script and kills it if it reaches the timeout.

1) test.sh

#!/bin/bash
LOG='log.txt'
trap 'echo "LineNo.$LINENO" >> $LOG; exit 1;' ERR SIGINT SIGTERM;
CMD="php test.php"
echo $CMD
eval $CMD &>> $LOG &
GPID=$(ps -o pgid= $$);
CPID=$!
echo "PIDs: $GPID - $CPID "
CNT=0;
CNT_TIME_OUT=10;
while true; do
  if ps -p $CPID > /dev/null; then
    echo "$CPID is running.."
    if [ $CNT -gt $CNT_TIME_OUT ]; then
      echo "Timeout reached $CNT_TIME_OUT sec. Killing $GPID.. breaking.."
      kill -- -$GPID
      break
    fi
  else
    echo "$CPID is completed. Breaking.."
    break
  fi
  CNT=$((CNT+1))
  echo "waiting since $CNT secs.."
  sleep 1
done
exit;

2) test.php

<?php
$i = 0;
$l = 300;
while ($i < $l) {
    #throw new \Exception("Testing");
    $date = new DateTime();
    $date->add(DateInterval::createFromDateString('yesterday'));
    echo $date->format('Y-m-d H:i:s') . " => $i\n";
    sleep(1);
    $i++;
    echo "End => $i\n";
}
die;
?>
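As an aside (not part of the scripts above), coreutils provides a timeout(1) utility that covers the simple single-command case of this monitor loop: it runs a command and kills it when the duration expires, exiting with status 124 on timeout:

```shell
# timeout kills the command after the given duration.
timeout 1 sleep 5
RC=$?
echo "exit status: $RC"    # 124 means the command was killed on timeout
```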

Thanks.

Posted in Shell Script | Tagged | Leave a comment

Setup Solr using ZooKeeper ensemble on Ubuntu

Setup Oracle Java:
Follow the quick steps below to set up the latest Java version on your system:

java -version
tar -zxvf jdk-8u45-linux-x64.tar.gz
sudo mkdir -p /usr/lib/jvm/jdk1.8.0_45
sudo mv jdk1.8.0_45/* /usr/lib/jvm/jdk1.8.0_45/
sudo update-alternatives --install "/usr/bin/java" "java" "/usr/lib/jvm/jdk1.8.0_45/bin/java" 1
sudo update-alternatives --install "/usr/bin/javac" "javac" "/usr/lib/jvm/jdk1.8.0_45/bin/javac" 1
sudo update-alternatives --install "/usr/bin/javaws" "javaws" "/usr/lib/jvm/jdk1.8.0_45/bin/javaws" 1
sudo update-alternatives --set java /usr/lib/jvm/jdk1.8.0_45/bin/java
sudo update-alternatives --set javac /usr/lib/jvm/jdk1.8.0_45/bin/javac
sudo update-alternatives --set javaws /usr/lib/jvm/jdk1.8.0_45/bin/javaws
java -version
vim ~/.bashrc
export JAVA_HOME="/usr/lib/jvm/jdk1.8.0_45"
export PATH="$PATH:$JAVA_HOME/bin"
source ~/.bashrc
echo $JAVA_HOME
echo $PATH

For more details, kindly visit how-to-install-oracle-jdk-7-on-ubuntu-12-04

Setup Zookeeper Ensemble: (https://cwiki.apache.org/confluence/display/solr/Setting+Up+an+External+ZooKeeper+Ensemble)

Consider we have 2 servers (192.168.0.101, 192.168.0.111) and we are setting up a 6-node ZooKeeper ensemble (three instances per server).
Follow the quick steps below to set up the ZooKeeper ensemble on your system:

wget 'http://mirror.symnds.com/software/Apache/zookeeper/stable/zookeeper-3.4.6.tar.gz'
tar -xvf zookeeper-3.4.6.tar.gz
sudo mkdir -p /usr/lib/zookeeper-3.4.6
sudo mv zookeeper-3.4.6/* /usr/lib/zookeeper-3.4.6/
cd /usr/lib/zookeeper-3.4.6/
sudo cp conf/zoo_sample.cfg conf/zoo_1.cfg
sudo vim conf/zoo_1.cfg

Add the following configs:

tickTime=2000
dataDir=/var/lib/zookeeper/1/
clientPort=2181
initLimit=5
syncLimit=2
server.1=192.168.0.101:2888:3888
server.2=192.168.0.101:2889:3889
server.3=192.168.0.101:2890:3890
server.4=192.168.0.111:2888:3888
server.5=192.168.0.111:2889:3889
server.6=192.168.0.111:2890:3890

Repeat the above setup on all servers. Each instance needs its own config file with a unique clientPort and dataDir.

Make data directories:(Repeat for all servers)

sudo mkdir -p /var/lib/zookeeper/1/
sudo sh -c 'echo "1" > /var/lib/zookeeper/1/myid'
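Since each server hosts three instances in this layout, the data directories and myid files can be created in one loop. A sketch: ZK_DATA_BASE is a hypothetical variable so the snippet can be tried outside /var/lib, and on the second server the IDs would be 4, 5 and 6:

```shell
# Create a data directory and myid file for znode IDs 1-3.
BASE="${ZK_DATA_BASE:-/tmp/zk-data-demo}"   # /var/lib/zookeeper in the real setup
for i in 1 2 3; do                          # use 4 5 6 on the second server
    mkdir -p "$BASE/$i"
    echo "$i" > "$BASE/$i/myid"
done
```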

Start servers:(Repeat for all servers)
sudo bin/zkServer.sh start zoo_1.cfg

Check Status:

bin/zkServer.sh status zoo_1.cfg
echo status | nc localhost 2181

Test Client:

bin/zkCli.sh -server 192.168.0.101:2181
[zk: 192.168.0.101:2181(CONNECTED) 1] ls /
[zk: 192.168.0.101:2181(CONNECTED) 2] get /configs/gettingstarted/solrconfig.xml
[zk: 192.168.0.101:2181(CONNECTED) 3] quit

Stop Servers:(Repeat for all servers)
sudo bin/zkServer.sh stop zoo_1.cfg

For more detail kindly visit Setup-zookeeper-ensemble-on-ubuntu

Setup Solr:

Consider we have 2 servers (192.168.0.101, 192.168.0.111) and we are setting up 2 Solr instances with 1 shard and a replication factor of 2.
wget 'http://apache.mirrors.hoobly.com/lucene/solr/5.2.0/solr-5.2.0.tgz'
SolrCloud Standalone Setup using Embedded Zookeeper (Testing environment): (http://lucene.apache.org/solr/quickstart.html)
Please follow above link for demos.

SolrCloud Mode Setup using External Zookeeper Ensemble (Testing environment): (https://cwiki.apache.org/confluence/display/solr/Solr+Start+Script+Reference)
tar -zxvf solr-5.2.0.tgz
cd solr-5.2.0/

Create Node:

mkdir -p example/cloud/node1/solr
cp server/solr/solr.xml example/cloud/node1/solr
mkdir -p example/cloud/node2/solr
cp server/solr/solr.xml example/cloud/node2/solr
bin/solr start -cloud -s example/cloud/node1/solr -h 192.168.0.101 -p 8983 -z 192.168.0.101:2181,192.168.0.101:2182,192.168.0.101:2183,192.168.0.111:2184,192.168.0.111:2185,192.168.0.111:2186
bin/solr start -cloud -s example/cloud/node2/solr -h 192.168.0.111 -p 8983 -z 192.168.0.101:2181,192.168.0.101:2182,192.168.0.101:2183,192.168.0.111:2184,192.168.0.111:2185,192.168.0.111:2186

Create Collection:
bin/solr create -c gettingstarted -d basic_configs -rf 2

Status:

bin/solr status
bin/solr healthcheck -c gettingstarted

Delete Collection:
bin/solr delete -c gettingstarted

Restart:

bin/solr restart -cloud -s example/cloud/node1/solr -h 192.168.0.101 -p 8983 -z 192.168.0.101:2181,192.168.0.101:2182,192.168.0.101:2183,192.168.0.111:2184,192.168.0.111:2185,192.168.0.111:2186
bin/solr restart -cloud -s example/cloud/node2/solr -h 192.168.0.111 -p 8983 -z 192.168.0.101:2181,192.168.0.101:2182,192.168.0.101:2183,192.168.0.111:2184,192.168.0.111:2185,192.168.0.111:2186

Stop Node:
bin/solr stop -all;

Clean all Testing files:
rm -Rf example/cloud/

SolrCloud as a Service using External Zookeeper Ensemble (Production environment): (https://cwiki.apache.org/confluence/display/solr/Taking+Solr+to+Production)

Run Installation script:

tar xzf solr-5.2.0.tgz solr-5.2.0/bin/install_solr_service.sh --strip-components=2
sudo bash ./install_solr_service.sh -help
sudo bash ./install_solr_service.sh solr-5.2.0.tgz -i /opt -d /var/solr -u solr -s solr -p 8983
OR
sudo bash ./install_solr_service.sh solr-5.2.0.tgz
id: solr: no such user
Creating new user: solr
Adding system user `solr' (UID 109) ...
Adding new group `solr' (GID 116) ...
Adding new user `solr' (UID 109) with group `solr' ...
Creating home directory `/home/solr' ...
Extracting solr-5.2.0.tgz to /opt
Creating /etc/init.d/solr script ...
Adding system startup for /etc/init.d/solr ...
Waiting to see Solr listening on port 8983 [/]
Started Solr server on port 8983 (pid=1704). Happy searching!
Service solr installed.
sudo service solr status
sudo service solr stop

Setup SolrCloud:

To run Solr in SolrCloud mode, add the following setting to the environment-specific include file (/var/solr/solr.in.sh):
ZK_HOST="192.168.0.101:2181,192.168.0.101:2182,192.168.0.101:2183,192.168.0.111:2184,192.168.0.111:2185,192.168.0.111:2186"

If you’re using a ZooKeeper instance that is shared by other systems, it’s recommended to isolate the SolrCloud znode tree using ZooKeeper’s chroot support. For instance, to ensure all znodes created by SolrCloud are stored under /solr, you can put /solr on the end of your ZK_HOST connection string, such as:
ZK_HOST="192.168.0.101:2181,192.168.0.101:2182,192.168.0.101:2183,192.168.0.111:2184,192.168.0.111:2185,192.168.0.111:2186/solr"

If using a chroot for the first time, you need to bootstrap the Solr znode tree in ZooKeeper by using the zkcli.sh script, such as:
/opt/solr/server/scripts/cloud-scripts/zkcli.sh -zkhost 192.168.0.101:2181 -cmd bootstrap -solrhome /var/solr/data

If the above script is unable to create the 'solr' chroot because there are 0 cores (as we have not created any core yet), then create one with the ZooKeeper client:

/usr/lib/zookeeper-3.4.6/bin/zkCli.sh -server 192.168.0.101:2181
[zk: 192.168.0.101:2181(CONNECTED) 2] create /solr solr

Note: The above fix is not documented in the Solr docs.
sudo service solr start

Upload a configuration directory: (https://cwiki.apache.org/confluence/display/solr/Command+Line+Utilities)
/opt/solr/server/scripts/cloud-scripts/zkcli.sh -zkhost 192.168.0.101:2181/solr -cmd upconfig -confname data_driven_schema_configs -confdir /opt/solr/server/solr/configsets/data_driven_schema_configs/conf

To delete a wrong config, use the ZooKeeper client:

/usr/lib/zookeeper-3.4.6/bin/zkCli.sh -server 192.168.0.101:2181
[zk: 192.168.0.111:2184(CONNECTED) 8] rmr /configs

Create Collection:(https://cwiki.apache.org/confluence/display/solr/Collections+API)

http://host:port/solr/admin/collections?action=CREATE&name=gettingstarted&numShards=1&replicationFactor=1&maxShardsPerNode=1&collection.configName=data_driven_schema_configs

http://host:port/solr/admin/collections?action=DELETE&name=gettingstarted

For more details, kindly visit Install-solr-on-ubuntu

Important download links:

Search Engine research:

https://www.elastic.co/products/elasticsearch

http://solr-vs-elasticsearch.com/

http://lucene.apache.org/solr/quickstart.html

https://cwiki.apache.org/confluence/display/solr/Apache+Solr+Reference+Guide

http://wiki.apache.org/solr/FrontPage

https://cwiki.apache.org/confluence/display/solr/Uploading+Structured+Data+Store+Data+with+the+Data+Import+Handler

Requirement:

http://lucene.apache.org/solr/5_1_0/SYSTEM_REQUIREMENTS.html

Downloads:

http://mirror.reverse.net/pub/apache/lucene/solr/5.1.0/solr-5.1.0.tgz

http://apache.mirrors.hoobly.com/lucene/solr/5.2.0/solr-5.2.0.tgz

http://download.oracle.com/otn-pub/java/jdk/8u45-b14/jdk-8u45-linux-x64.tar.gz

http://archive.apache.org/dist/lucene/solr/

http://mirror.symnds.com/software/Apache/zookeeper/stable/zookeeper-3.4.6.tar.gz

Setup:

https://cwiki.apache.org/confluence/display/solr/Taking+Solr+to+Production

http://zookeeper.apache.org/doc/r3.4.6/zookeeperStarted.html

PHP Client:

http://wiki.apache.org/solr/SolPHP

https://pecl.php.net/package/solr

http://php.net/manual/en/book.solr.php

Symfony Bundle:

https://packagist.org/packages/solarium/solarium

https://packagist.org/packages/nelmio/solarium-bundle

https://packagist.org/packages/reprovinci/solr-php-client

https://packagist.org/packages/floriansemm/solr-bundle

https://packagist.org/packages/internations/solr-utils

https://packagist.org/packages/internations/solr-query-component


Thank you.

Posted in Java, Solr, Zookeeper | Tagged , , , | Leave a comment

Setup ZooKeeper Ensemble on Ubuntu

Download Apache ZooKeeper:

The first step in setting up Apache ZooKeeper is, of course, to download the software. It’s available from http://zookeeper.apache.org/releases.html.

#wget http://mirror.symnds.com/software/Apache/zookeeper/stable/zookeeper-3.4.6.tar.gz
#tar -xvf zookeeper-3.4.6.tar.gz

Configure the instance:
Let's create one in conf/zoo1.cfg:

#sudo mkdir -p /usr/lib/zookeeper-3.4.6
#sudo mv zookeeper-3.4.6/* /usr/lib/zookeeper-3.4.6/
#cd /usr/lib/zookeeper-3.4.6/
#cp conf/zoo_sample.cfg conf/zoo1.cfg
#vim conf/zoo1.cfg

Add the following settings:

tickTime=2000
dataDir=/var/lib/zookeeper
clientPort=2181
initLimit=5
syncLimit=2
server.1=192.168.0.1:2888:3888
server.2=192.168.0.2:2888:3888
server.3=192.168.0.3:2888:3888

The parameters are as follows:
tickTime: Part of what ZooKeeper does is to determine which servers are up and running at any given time, and the minimum session timeout is defined as two "ticks". The tickTime parameter specifies, in milliseconds, how long each tick should be.
dataDir: This is the directory in which ZooKeeper will store data about the cluster. This directory should start out empty.
clientPort: This is the port on which Solr will access ZooKeeper.
initLimit: Amount of time, in ticks, to allow followers to connect and sync to a leader. In this case, you have 5 ticks, each of which is 2000 milliseconds long, so the server will wait as long as 10 seconds to connect and sync with the leader.
syncLimit: Amount of time, in ticks, to allow followers to sync with ZooKeeper. If followers fall too far behind a leader, they will be dropped.
server.X: These are the IDs and locations of all servers in the ensemble, along with the ports on which they communicate with each other. The server ID must additionally be stored in a myid file located in the dataDir of each ZooKeeper instance. The ID identifies each server, so for this first instance you would create the file /var/lib/zookeeper/myid with the content "1".

Once this file is in place, you’re ready to start the ZooKeeper instance.

Then create the /var/lib/zookeeper directory and a myid file, so each node can identify itself:

#sudo mkdir -p /var/lib/zookeeper
#echo "1" > /var/lib/zookeeper/myid

where "1" is the node number (so put "2" for the next node, and so on).
Do the same on all nodes.

Standalone Setup:

You can also set up multiple instances on localhost. You just need a separate data directory per instance (for storing the id and data) and to make each instance listen on a different port.

clientPort=2181
clientPort=2182
clientPort=2183

dataDir=/var/lib/zookeeper/1/
dataDir=/var/lib/zookeeper/2/
dataDir=/var/lib/zookeeper/3/

server.1=localhost:2888:3888
server.2=localhost:2889:3889
server.3=localhost:2890:3890

#echo "1" > /var/lib/zookeeper/1/myid
#echo "2" > /var/lib/zookeeper/2/myid
#echo "3" > /var/lib/zookeeper/3/myid
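The three per-instance config files described above can be generated in one go. A sketch under the assumption that writing to a scratch directory is fine; ZK_CONF_DIR is a hypothetical variable, so point it at the real conf/ directory when applying this:

```shell
# Generate zoo1.cfg..zoo3.cfg with per-instance clientPort and dataDir.
CONF="${ZK_CONF_DIR:-/tmp/zk-conf-demo}"
mkdir -p "$CONF"
for i in 1 2 3; do
    cat > "$CONF/zoo$i.cfg" <<EOF
tickTime=2000
dataDir=/var/lib/zookeeper/$i/
clientPort=$((2180 + i))
initLimit=5
syncLimit=2
server.1=localhost:2888:3888
server.2=localhost:2889:3889
server.3=localhost:2890:3890
EOF
done
```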

Once you have each node set up, you can start ZooKeeper by issuing on each node:

bin/zkServer.sh start zoo1.cfg
bin/zkServer.sh start zoo2.cfg
bin/zkServer.sh start zoo3.cfg

Check that the servers are running:
$bin/zkServer.sh status zoo1.cfg
$bin/zkServer.sh status zoo2.cfg
$bin/zkServer.sh status zoo3.cfg

$echo status | nc localhost 2181
$echo status | nc localhost 2182
$echo status | nc localhost 2183

Connect a client:

bin/zkCli.sh -server localhost:2181
[zk: localhost:2181(CONNECTED) 1] ls /
[zk: localhost:2181(CONNECTED) 2] ls /configs/
[zk: localhost:2181(CONNECTED) 3] ls /collections/
[zk: localhost:2181(CONNECTED) 4] get /configs/gettingstarted/solrconfig.xml
[zk: localhost:2181(CONNECTED) 5] quit

Stop them:

bin/zkServer.sh stop zoo1.cfg
bin/zkServer.sh stop zoo2.cfg
bin/zkServer.sh stop zoo3.cfg

Thank you.

Posted in Zookeeper | Tagged | Leave a comment

How to install Tor on Ubuntu

What is Tor?

Tor is free software and an open network that helps you defend against traffic analysis, a form of network surveillance that threatens personal freedom and privacy, confidential business activities and relationships, and state security.

Why Anonymity Matters?

Tor protects you by bouncing your communications around a distributed network of relays run by volunteers all around the world: it prevents somebody watching your Internet connection from learning what sites you visit, and it prevents the sites you visit from learning your physical location.

For more information visit torproject.

How to set up Tor:

$wget https://www.torproject.org/dist/torbrowser/4.0.4/tor-browser-linux64-4.0.4_en-US.tar.xz
$tar xf tor-browser-linux64-4.0.4_en-US.tar.xz
$tor-browser_en-US/start-tor-browser

Thanks.

Posted in Tor | Tagged | Leave a comment

Setup Zabbix on Ubuntu

Zabbix is the ultimate open source enterprise-level software designed for monitoring availability and performance of IT infrastructure components.
For more details visit following links,
www.zabbix.com
www.zabbix.org

Install Zabbix:

# wget http://repo.zabbix.com/zabbix/2.2/ubuntu/pool/main/z/zabbix-release/zabbix-release_2.2-1+precise_all.deb
# sudo dpkg -i zabbix-release_2.2-1+precise_all.deb
# sudo apt-get update

Install Zabbix Components:

Zabbix server – a central process of Zabbix software that performs monitoring, interacts with Zabbix proxies and agents, calculates triggers, sends notifications; a central repository of data
# sudo apt-get install zabbix-server-mysql

Web frontend – the web interface provided with Zabbix
# sudo apt-get install zabbix-frontend-php

Zabbix agent – a process deployed on monitoring targets to actively monitor local resources and applications
# sudo apt-get install zabbix-agent

Zabbix proxy – a process that may collect data on behalf of Zabbix server, taking some processing load off of the server

Zabbix API – Zabbix API allows you to use the JSON RPC protocol to create, update and fetch Zabbix objects (like hosts, items, graphs and others) or perform any other custom tasks

Zabbix Setup:

PHP configuration for Zabbix frontend:

The Apache configuration file for the Zabbix frontend is located at /etc/apache2/conf.d/zabbix. Some PHP settings are already configured:

php_value max_execution_time 300
php_value memory_limit 128M
php_value post_max_size 16M
php_value upload_max_filesize 2M
php_value max_input_time 300
# php_value date.timezone Asia/Kolkata

It's necessary to uncomment the "date.timezone" setting and set the correct timezone for your location. After changing the configuration file, restart the Apache web server.
OR
sudo vim /etc/php5/apache2/php.ini
date.timezone = Asia/Kolkata
#sudo service apache2 restart

Follow installation steps here, http://localhost/zabbix/setup.php#

Zabbix configuration:

Appliance Zabbix setup has the following passwords and other configuration changes:

1: Passwords
System:
root:zabbix
zabbix:zabbix
Database:
root:zabbix
zabbix:zabbix
Zabbix frontend:
Admin:zabbix

If you change the frontend password, do not forget to update the password in the web monitoring settings (Configuration → Hosts, Web for host "Zabbix server").

To change the database user password it has to be changed in the following locations:
MySQL;
zabbix_server.conf;
zabbix.conf.php.

2: File locations

Configuration files are placed in /etc.
Zabbix logfiles are placed in /var/log/zabbix.
Zabbix frontend is placed in /usr/share/zabbix.
Home directory for user zabbix is /etc/zabbix.

3: Changes to Zabbix configuration

Server name for Zabbix frontend set to “Zabbix 2.2 Appliance”;
Frontend timezone is set to Europe/Riga, Zabbix home (this can be modified in /etc/php5/apache2/php.ini);
Disabled triggers and web scenarios are shown by default to reduce confusion.

Thank you.

Posted in Zabbix | Tagged | Leave a comment

Setup Vagrant on Ubuntu

Vagrant provides easy to configure, reproducible, and portable work environments built on top of industry-standard technology and controlled by a single consistent workflow to help maximize the productivity and flexibility of you and your team.
To achieve its magic, Vagrant stands on the shoulders of giants. Machines are provisioned on top of VirtualBox, VMware, AWS, or any other provider. Then, industry-standard provisioning tools such as shell scripts, Chef, or Puppet, can be used to automatically install and configure software on the machine.

More details can be found at vagrantup

Steps:

1) Install dependency :
VirtualBox is a general-purpose full virtualizer for x86 hardware, targeted at server, desktop and embedded use. (VirtualBox)

Add one of the following lines according to your distribution to your /etc/apt/sources.list:
deb http://download.virtualbox.org/virtualbox/debian precise contrib

#wget -q https://www.virtualbox.org/download/oracle_vbox.asc -O- | sudo apt-key add -
#sudo apt-get install virtualbox-4.3

2) Download and install vagrantup.

#wget https://dl.bintray.com/mitchellh/vagrant/vagrant_1.6.5_x86_64.deb
#sudo dpkg -i /home/hemant/Downloads/vagrant_1.6.5_x86_64.deb

3) Setup vagrant.
Create a root directory for your project and navigate in it:

#mkdir myproject
#cd myproject


Next, run the initialization command:

a) Pass box as parameter:
#vagrant init [box-name] [box-url]
If a first argument is given, it will prepopulate the config.vm.box setting in the created Vagrantfile. [ex. hashicorp/precise64]
If a second argument is given, it will prepopulate the config.vm.box_url setting in the created Vagrantfile. [ex. https://vagrantcloud.com/hashicorp/boxes/precise64]
b) Create a Vagrantfile and update the default configuration.

#vagrant init
#vim Vagrantfile

Modify the file:
#config.vm.box = "precise64"

More Vagrantfile options:

#Install dependencies using provision,
config.vm.provision :shell, :path => "install.sh"
#Create a private network, which allows host-only access to the machine using a specific IP.
config.vm.network :private_network, ip: "192.168.1.1"
#Share an additional folder to the guest VM.
config.vm.synced_folder "/h_data", "/v_data", owner: "root", group: "root", :mount_options => ['dmode=777,fmode=777']

This tells Vagrant to use the new box. More boxes can be discovered at Vagrantcloud.
Save the file and exit. Now you can deploy the guest machine with the following command:

#vagrant up

This will bring up a VPS running Ubuntu 12.04 LTS 64-bit. To make use of it, you can easily SSH into it:

#vagrant ssh

Vagrant will share the project root folder from the host machine (the one containing the Vagrantfile) with a folder on the guest machine, /vagrant.

You can exit and go back to the host with the following command:

#exit

To stop and remove the guest machine and all traces of it,

#vagrant destroy

The reload command is usually required for changes made in the Vagrantfile to take effect. After making any modifications to the Vagrantfile, run:

#vagrant reload

The configured provisioners will not run again by default. You can force the provisioners to re-run by specifying the --provision flag.

To add more boxes for other projects, use the following command, which adds the 'hashicorp/precise64' box to Vagrant:

#vagrant box add hashicorp/precise64

Thank you.

Posted in Virtualization | Tagged , | Leave a comment

Ubuntu fix Shellshock bug

Test whether your system is vulnerable:

$ env X="() { :;}; echo shellshock" `which bash` -c "echo completed"
shellshock
completed

If you get "shellshock" in the output, then your system is vulnerable.

Bash has functions, though in a somewhat limited implementation, and it is possible to put these Bash functions into environment variables. This flaw is triggered when extra code is added to the end of these function definitions (inside the environment variable).
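The function-export mechanism being abused here is itself legitimate and easy to demonstrate. The sketch below shows only the benign part (exporting a function to a child bash process), not the exploit:

```shell
#!/bin/bash
# An exported bash function becomes visible in child bash processes via the
# environment; Shellshock abused code appended after such definitions.
greet() { echo "hello from an exported function"; }
export -f greet
OUT=$(bash -c 'greet')
echo "$OUT"
```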

Download the package for your OS version from fix.
For 12.04 64-bit LTS:

#wget https://launchpad.net/~ubuntu-security/+archive/ubuntu/ppa/+build/6400987/+files/bash_4.2-2ubuntu2.2_amd64.deb
#dpkg -i /home/hemant/Downloads/bash_4.2-2ubuntu2.2_amd64.deb

The patch used to fix this flaw ensures that no code is allowed after the end of a Bash function definition.
So if you run the above example with the patched version of Bash, you should get output similar to:

$ env X="() { :;}; echo shellshock" `which bash` -c "echo completed"
/bin/bash: warning: X: ignoring function definition attempt
/bin/bash: error importing function definition for `X'
completed

Now you are safe from the Shellshock bug.

More details can be found at:

Redhat

NVD

Thank you.

Posted in Security, Uncategorized | Tagged | Leave a comment

PHP multiprocessing using fork

Process Control support in PHP implements the Unix style of process creation, program execution, signal handling and process termination.

Find more details at PHP fork

The pcntl_fork() function creates a child process that differs from the parent process only in its PID and PPID.
On success, the PID of the child process is returned in the parent's thread of execution, and 0 is returned in the child's thread of execution.
On failure, -1 is returned in the parent's context, no child process is created, and a PHP error is raised.
The pcntl_wait($status) function protects against zombie children.

PHP fork is useful when we want to process multiple tasks in parallel.

Let's assume we have the following tasks to perform:
1) updateCache: 3s
2) sentEmail: 2s
3) logEntry: 1s
4) addDBEntry: 5s

If we perform the above tasks serially, it will take 11 sec (3+2+1+5).
But if we use parallel processing, it reduces to about 5 sec (the longest among them), using wait to protect against zombie children without returning immediately.

You can also let the parent process return and leave the child processes running in the background.
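The timing claim can be sanity-checked with a quick shell analogue before looking at the PHP code: three parallel tasks of 1s, 1s and 2s should finish together in about 2 seconds, not 4 (durations are shortened here purely for illustration):

```shell
#!/bin/bash
START=$SECONDS
sleep 1 & sleep 1 & sleep 2 &    # three parallel "tasks"
wait                             # reap all children -- no zombies remain
ELAPSED=$((SECONDS - START))
echo "elapsed: ${ELAPSED}s"
```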

<?php
$time_start = microtime(true);

function executeTask($types) {
    foreach ($types as $type) {
        switch ($type) {
            case 'updateCache':
                sleep(3);
                echo "updateCache \n";
                break;
            case 'sentEmail':
                sleep(2);
                echo "sentEmail \n";
                break;
            case 'logEntry':
                sleep(1);
                echo "logEntry \n";
                break;
            case 'addDBEntry':
                sleep(5);
                echo "addDBEntry \n";
                break;
            default:
                break;
        }
    }
}

$useFork = true;

if ($useFork) {
    $pid1 = pcntl_fork();
    if ($pid1 == -1) {
        // Fork failed: fall back to serial execution.
        $types = array("updateCache", "sentEmail", "logEntry", "addDBEntry");
        executeTask($types);
    } else if ($pid1) {
        $pid2 = pcntl_fork();
        if ($pid2 == -1) {
            $types = array("updateCache", "sentEmail", "logEntry");
            executeTask($types);
        } else if ($pid2) {
            $types = array("updateCache");
            executeTask($types);
        } else {
            $types = array("sentEmail", "logEntry");
            executeTask($types);
        }
        pcntl_wait($status);
    } else {
        $types = array("addDBEntry");
        executeTask($types);
    }

    pcntl_wait($status);
    if ($pid1 && $pid2) {
        $time_end = microtime(true);
        $time = $time_end - $time_start;
        echo "Total time: $time seconds\n";
    }
} else {
    $types = array("updateCache", "sentEmail", "logEntry", "addDBEntry");
    executeTask($types);
    $time_end = microtime(true);
    $time = $time_end - $time_start;
    echo "Total time: $time seconds\n";
}

Thank you.

Posted in Php | Leave a comment

Git branching model

The following model can be used for features and fixes.

Consideration:

Servers:
- Local Server
- Remote Server
- Production Server

Branches:
- Release branch : “release”
- Development branch : “master”
- Feature branch : “feature”

Local Server:

1)Check current status:

- a)List local branches.

$git branch

- b)List local and remote-tracking branches.

$git branch -a

- c)List all remote branches.

$git ls-remote --heads

- d)Check summary of Repository.

$git remote show origin

2)Pull latest updates

$git pull origin master;

3)Create branch: Create branch and make changes.


$git branch feature
$git checkout feature
or
$git checkout -b feature

Make changes.

4) Add and Commit changes :


$git add file_name
$git commit -m "message"

5)Add new remote branch: Push branch to origin server.

$git push -u origin feature

Development Server:

1)Fetch new branches: Fetch all remote branches.

$git fetch --all

2)Checkout new remote branch: Checkout and track new remote branch.

$git checkout -tb feature origin/feature

Test code.

Local Server:

1)Merge branch: Merge ‘feature’ branch to ‘master’ branch.


$git checkout master;
$git merge feature;

2)Make a feature release.


$git checkout release;
$git pull origin release;
$git merge master;
$git fetch --tags;
$git describe;
$git tag -a -m 'log message'
$git push origin release --tags
$git checkout master;

3)Delete branch: Delete new branch locally and remotely.


$git branch -D feature

$git push origin --delete feature
or
$git push origin :feature

4)Prune branches: Delete all stale remote-tracking branches under origin.
These stale branches have already been removed from the remote repository,
but are still locally available in "remotes/origin".


git remote prune origin

When you use git push origin :feature, it automatically removes origin/feature, so when you run git remote prune origin you are pruning branches that were removed by someone else.
It's more likely that your co-workers now need to run git remote prune origin to get rid of branches you have removed.
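The local branch/merge/delete cycle above can be exercised end to end in a throwaway repository. A sketch only: no remote is involved, so the push and prune steps are omitted, and the identity values are placeholders:

```shell
#!/bin/bash
REPO=$(mktemp -d)
cd "$REPO"
git init -q
git config user.email "demo@example.com"   # placeholder identity
git config user.name "demo"
MAIN=$(git symbolic-ref --short HEAD)      # master or main, depending on git
git commit -q --allow-empty -m "initial commit"
git checkout -q -b feature
git commit -q --allow-empty -m "feature work"
git checkout -q "$MAIN"
git merge -q feature                       # fast-forward merge
git branch -d feature                      # safe delete: branch is merged
git log --oneline
```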

Production Server:

1) Fetch and Checkout new tag.

$git fetch --tags
$git checkout

Note: Use the "stash" commands to avoid conflicts with uncommitted changes.


$git stash
$git stash pop
$git stash apply

Thank you.

Posted in Revision control | Tagged | Leave a comment

Reset mysql root password

If you set a root password previously but have forgotten it, you can set a new one. The following steps are for Unix systems.

1) Stop the MySQL server.

#sudo /etc/init.d/mysql stop

OR

#sudo kill `sudo cat /var/run/mysqld/mysqld.pid`

2) Now let's start the mysql daemon, skipping the grant tables which store the passwords.

#sudo mysqld_safe --skip-grant-tables &

3) Now you should be able to connect to mysql without a password.

#mysql --user=root mysql

4) Update root password.


mysql>update user set Password=PASSWORD('rootnewpassword') where user='rootuser';

mysql>flush privileges;

mysql>exit;

5) Stop MySQL and start it again.


#sudo kill `sudo cat /var/run/mysqld/mysqld.pid`

#sudo /etc/init.d/mysql start

Thank you.

Posted in Mysql | Tagged | Leave a comment