Wednesday, October 23, 2013

Epoch Time in Shell Script for a Given Date

Below is the code to get the epoch time (in seconds and milliseconds) in a shell script.

The code is self-explanatory for shell script developers:

startdate_ymd=`date --date="31 day ago" +%Y%m%d`
startdate=`date --date="${startdate_ymd}" +%s`
startdatemillis=`expr ${startdate} \* 1000`

enddate_ymd=`date --date="1 day ago" +%Y%m%d`
enddate=`date --date="${enddate_ymd}" +%s`
enddatemillis=`expr ${enddate} \* 1000`

Friday, September 27, 2013

Hadoop - Map Reduce Framework

Map Reduce Framework
  • The isSplitable method of an InputFormat is called for each input file. Based on whether it returns true or false, the framework decides how to split that file into map tasks. You can write a custom FileInputFormat that overrides isSplitable to return false so that a single mapper processes all the storage blocks of a given file (see the sketch after this list).
  • Key/value pairs in MapReduce should always use types that implement the Writable interface, which provides Hadoop's serialization and deserialization format for writing data to the context (e.g., Text, IntWritable, NullWritable, ...).
  • While a MapReduce job is running, disk I/O and network I/O increase, because intermediate map output is written to local disk and then transferred across the network to the reducers running on different nodes.
  • The MapReduce programming model isolates the execution of tasks, with a one-time copy of the shuffled and sorted data to the reducers. Reduce tasks do not communicate with each other. If tasks really need to communicate, you have to use an external mechanism such as RMI or messaging (e.g., Apache Kafka).
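As a rough illustration of the custom FileInputFormat mentioned in the first bullet, here is a minimal sketch using the new org.apache.hadoop.mapreduce API (the class name is my own):

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.JobContext;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;

// Hypothetical input format: by refusing to split its input, a single map task
// processes the whole file even when the file spans multiple HDFS blocks.
public class NonSplittableTextInputFormat extends TextInputFormat {

    @Override
    protected boolean isSplitable(JobContext context, Path file) {
        return false; // the entire file becomes one InputSplit, hence one mapper
    }
}

In the job driver you would register it with job.setInputFormatClass(NonSplittableTextInputFormat.class).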
Map  
  • You can pass one file, multiple files, or directories to the mapper. The number of map tasks started depends on the number of blocks stored in HDFS (default block size 64 MB) for the input files. Each map task processes one input split; sometimes, if a block is large, it can be divided into multiple input splits for better performance.
  • If there are zero reducers, each map task writes its output directly to the configured output directory: N map tasks generate N output files in HDFS.
  • Each map task generates a single intermediate file and writes it to the local disk of the node it runs on (not HDFS). The reducers later pick this data up from disk to process.
  • Intermediate key/value pairs (the output of the mappers) are written to the local disk of the machine running the map task and later copied to the machines running the reduce tasks.
Reducer
  • If the number of reducers is set to 1, that single reducer takes all the output from the mappers and writes the job's result to a single file in HDFS.
  • The reducer collects all the values for a given key and passes the key with its list of values to the reduce() method. reduce() only kicks in once the map tasks are 100% complete; if the reduce phase shows 10% while the map phase is at 85%, the reduce tasks are still copying/transferring the map output.
  • The reducer writes its output files to HDFS, just as the mapper does when the reducer count is set to 0 or no reducer class is configured. In either case the number of output files equals the reduce task count (or the map task count for a map-only job); see the driver sketch below.
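To make the reducer-count behaviour above concrete, here is a minimal word-count driver sketch; it uses the new mapreduce API and Hadoop's bundled TokenCounterMapper and IntSumReducer classes rather than code from this post:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.map.TokenCounterMapper;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.reduce.IntSumReducer;

public class WordCountDriver {
    public static void main(String[] args) throws Exception {
        Job job = new Job(new Configuration(), "word count");
        job.setJarByClass(WordCountDriver.class);
        job.setMapperClass(TokenCounterMapper.class); // library mapper: emits (word, 1)
        job.setReducerClass(IntSumReducer.class);     // library reducer: sums the 1s

        // job.setNumReduceTasks(0); // map-only job: N map tasks write N output files
        job.setNumReduceTasks(1);    // single reducer: all map output ends up in one file

        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}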
Infrastructure
1) Say you have 10 DataNodes/TaskTrackers (node1..node10). If a single reducer runs on node4 and produces output, then per the block placement policy the first copy of each output block is written on node4, and per the replication factor the data is replicated to other nodes. The node on which the reduce task runs always keeps the first copy of its output, which is then replicated.

2) Hadoop has a "speculative execution" feature: if a task runs slowly, or its TaskTracker/node fails after the task has been submitted, the JobTracker smartly launches another attempt of the same task on an available TaskTracker and uses whichever attempt finishes first.

3) If a file's blocks are replicated to 3 DataNodes and even 2 of those DataNodes fail, Hadoop can still read the data for processing as long as the data is available on at least 1 DataNode.
  
 Hadoop Job
   If your Linux machine's Hadoop configuration does not point to a NameNode, DataNodes, etc., a Hadoop job will run in LocalJobRunner mode using your local file system.

NameNode
The NameNode's role is to return the block locations for a given file name. To make this lookup fast, the NameNode keeps the file names, block locations, and other file-related metadata in RAM.

Combiner
Use a combiner in your MapReduce jobs when the reduce operation is associative and commutative. In the job driver call job.setCombinerClass(WordCountCombiner.class). It increases the efficiency of your MapReduce job: on each machine, once a map task completes, the combiner is invoked to aggregate that task's output and store it for the reducer. Combiners perform local aggregation (of word counts, in this example), thereby reducing the number of key/value pairs that need to be shuffled across the network to the reducers.
Ex : Say a word-count program is split across multiple map tasks, and each map task's output contains tuples like [the,1][the,1] destined for the reducer. With a combiner, these are collapsed to [the,2] immediately after the map task's output is produced and kept for the reducer. If multiple map tasks end up with [the,2] and [the,3] after combining, the reducer can easily produce [the,5] because the combiner has already done most of the work on far less data. A combiner is not a replacement for the reducer; it only helps the reducer. The combiner takes its input from the mapper and its output is given to the reducer, while only the reducer's output is written to the job's output files.
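Here is a minimal sketch of the WordCountCombiner referenced by the setCombinerClass call above; because word counting is associative and commutative, its body looks just like a summing reducer:

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

// Runs on each map task's local output: [the,1][the,1] becomes [the,2]
// before anything is shuffled across the network to the reducers.
public class WordCountCombiner extends Reducer<Text, IntWritable, Text, IntWritable> {

    private final IntWritable sum = new IntWritable();

    @Override
    protected void reduce(Text word, Iterable<IntWritable> counts, Context context)
            throws IOException, InterruptedException {
        int total = 0;
        for (IntWritable count : counts) {
            total += count.get();
        }
        sum.set(total);
        context.write(word, sum);
    }
}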

Partitioner 
A partitioner lets you control which reducer receives which keys, so a specific reducer processes a specific partition of the data. Partitioning can slow down your job, so make sure your use case justifies it.
Ex: You have one huge customer sales report covering 12 months of data, and the customer wants 12 separate files, one per month. The mapper can emit the date as the key, and a custom partitioner with 12 reducers (job.setNumReduceTasks(12)) passes each month's partition to its own reducer. Each reducer then produces one file (12 files in total), each containing a single month's records.
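A sketch of how such a partitioner might look, assuming the mapper emits the date as a yyyyMMdd Text key (the class name and key format are my own assumptions):

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Partitioner;

// Hypothetical partitioner for the sales-report example: the key is assumed to be
// a date formatted as yyyyMMdd, so characters 4-6 hold the month (01-12).
public class MonthPartitioner extends Partitioner<Text, Text> {

    @Override
    public int getPartition(Text key, Text value, int numPartitions) {
        int month = Integer.parseInt(key.toString().substring(4, 6)); // 1..12
        return (month - 1) % numPartitions; // with 12 reducers, one month per reducer
    }
}

In the driver you would pair it with job.setPartitionerClass(MonthPartitioner.class) and job.setNumReduceTasks(12).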

TaskTracker
After a TaskTracker has been assigned a task, it creates a TaskRunner instance to manage it. The TaskRunner prepares the environment and launches a new JVM for the task. This happens for every task assigned to the TaskTracker.

Based on reading about partitioning Map/Reduce jobs: the number of maps is usually driven by the number of DFS blocks in the input files. In our case the input was an HBase scan of the DeviceMaster table, and the table had only one block, so only one map task was created.

Every TaskTracker periodically sends a heartbeat to the JobTracker. When the JobTracker has tasks to schedule, it assigns them to the first TaskTracker whose heartbeat arrives and which has free slots; if there are fewer pending tasks than free slots, all of them go to that TaskTracker. Ideally, as a second consideration, the TaskTracker gets the tasks whose data is stored locally on that node.
In particular, if a submitted job processes only a single input split, that job contains a single map task, and the JobTracker assigns it to a heartbeating TaskTracker.

HDFS
HDFS is optimized for write-once, streaming access to relatively large files. You can append to a file if required, but the intended usage is to write files to HDFS once and read them many times.

Records spanning blocks
If a record's data is split across blocks, with part of it stored in another block on another node, then while a node processes the record from the first block it requests the remaining information from the node holding the other block. Say block 1 on DN1 has part of a record and block 2 on DN2 has the rest of the same file's record: the request goes to DN2 for the remaining information, and that data is transferred to DN1 so the complete record can be processed.

SequenceFile
A SequenceFile stores the names of the classes used for the keys and values in its header. You can read a SequenceFile in Hadoop from the command line: "-text" calls toString() on the contents and displays them, while "-cat" dumps the raw data of the sequence file, which is not in a human-readable format.
$ hadoop fs -text sequenceFile
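For completeness, a minimal sketch of reading a SequenceFile programmatically with the Hadoop 1.x-era SequenceFile.Reader API (the input path argument is a placeholder):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Writable;
import org.apache.hadoop.util.ReflectionUtils;

public class SequenceFileDump {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path path = new Path(args[0]); // path to the SequenceFile in HDFS

        SequenceFile.Reader reader = new SequenceFile.Reader(fs, path, conf);
        try {
            // The key and value classes are read from the file header, as noted above.
            Writable key = (Writable) ReflectionUtils.newInstance(reader.getKeyClass(), conf);
            Writable value = (Writable) ReflectionUtils.newInstance(reader.getValueClass(), conf);
            while (reader.next(key, value)) {
                System.out.println(key + "\t" + value);
            }
        } finally {
            reader.close();
        }
    }
}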

In the HBase Master web UI (http://devnamenode:60010/master-status), clicking a table shows how many regions its data is split into. You can split the regions as required to increase the number of map tasks.

Note : You can also use the JobConf class (Hadoop library) to set the number of map and reduce tasks. But in general, leave it to HBase/Hadoop; it handles this efficiently.
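If you do want to set the counts yourself, a tiny old-API (org.apache.hadoop.mapred) sketch is shown below; note that the map count is only a hint, because the real number of maps follows the input splits/regions:

import org.apache.hadoop.mapred.JobConf;

public class TaskCountConfig {
    public static void main(String[] args) {
        JobConf jobConf = new JobConf();
        jobConf.setNumMapTasks(4);    // a hint only: actual map count follows the splits
        jobConf.setNumReduceTasks(2); // honored exactly
        System.out.println("mapred.map.tasks = " + jobConf.get("mapred.map.tasks"));
        System.out.println("mapred.reduce.tasks = " + jobConf.get("mapred.reduce.tasks"));
    }
}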

Ref : http://wiki.apache.org/hadoop/HowManyMapsAndReduces

Increasing the map count when using HBase.

In our system we use HBase and run a MapReduce job on top of it, with scan results as the input to the map tasks. In the dev environment only one map task was triggered because there was little data. On the HBase Master page (http://devnamenode:60010/master-status) I clicked the table in question and split it into multiple regions by clicking Split (it ended up with 3 regions), then ran major_compact on the same table. Once that was done, submitting the job spawned 4 map tasks based on the regions.

If the scan output spans multiple regions, one map task is triggered per region.

Powerful Hadoop CommandLine to manage files on HDFS.

Some useful Hadoop Commands
Hadoop replication factor
Commands to increase the Hadoop replication factor from 3 to 4.
Either of the commands below works if you want to change the replication of the file f1:

$ hadoop fs -setrep 4 f1
$ hadoop fs -Ddfs.replication=4 -cp f1 f1.tmp; hadoop fs -rm f1; hadoop fs -mv f1.tmp f1
 
View the number of blocks for a given file
$ hadoop fsck [path] [options]

$ hadoop fsck /path/to/file -files -blocks
 
Create a file in HDFS on the fly (when you are done, type EOF and press Enter):
$ hadoop fs -put - file4.txt << EOF

CDH - Hadoop - Hbase - Adding a host to an existing cluster

1. Install a CentOS version that matches the existing Hadoop cluster nodes.
2. Configure the static IP, DNS, gateway and hostname.
3. Enable SSH and disable the firewall.
4. In /etc/selinux/config, disable SELinux.
5. In /etc/hosts, add the other machines of the distributed cluster.
6. In /etc/fstab, add noatime to the defaults for the Hadoop partition on ext4 (see http://www.howtoforge.com/reducing-disk-io-by-mounting-partitions-with-noatime).
7. Install NTP (yum install ntp), start the service, and enable it with chkconfig ntpd on. Sync the time with the server pool: ntpdate pool.ntp.org.
8. Log in to Cloudera Manager and click 'Hosts'.
9. Click 'Add Hosts'.
10. Type the hostname and search.
11. Select the host and click 'Install CDH on machine'.
12. Before you select which version to install, check the CDH version on the master and use the same (go to the master and type hbase shell; on startup it shows the HBase and CDH versions).
13. Copy the .bashrc settings from another server to the new server.
14. Go to each and every service (TaskTracker, DataNode and RegionServer) and add the new server. Make sure Master is not checked while adding the RegionServer.
15. Copy the library jars (hadoop_lib_jars from SVN) into $HADOOP_HOME/lib and restart MapReduce in CDH.
16. Restart clients such as Azkaban once you have modified the ZooKeeper quorum to include this host.

HBASE - RegionServer - Hbase Master failed to reach RegionServer

A RegionServer failed to respond to the HBase Master. Basically, the RegionServer's ZooKeeper session timed out because a garbage collection caused a Java stop-the-world pause.

The blog post below explains very cleverly how to avoid these GC failures.

http://blog.cloudera.com/blog/2011/02/avoiding-full-gcs-in-hbase-with-memstore-local-allocation-buffers-part-1/

Below are some configuration changes we made on our side to avoid the issue during a system load test.

(HBase configuration) We increased the HBase ZooKeeper session timeout from 40 seconds to 90 seconds (the default is 60 seconds; per the HBase guide you can go up to a maximum of about 3 minutes). Collecting 1 GB of garbage takes 8 to 10 seconds on an average system. Since our heap memory is configured at 8 GB and a collection can touch around 7 GB, we could end up failing with a connection timeout.

Pass java arguments : -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=60 -XX:PrintFLSStatistics=1 -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:/usr/lib/hbase/logs/logs/gc-$(hostname)-hbase.log

Enable the MSLAB allocation scheme with the default flush values (in HBase 0.92 it is enabled by default).

The ZooKeeper ensemble should have an odd number of servers, as recommended for multi-server ZooKeeper setups.

HBASE - Corrupted Blocks

If you see corrupted-block warnings like the one below on your dfshealth page, run the command and check your HBase health.

http://devnamenode:50070/dfshealth.jsp

WARNING : There are about 1 missing blocks. Please check the log or run fsck. 

Run the command below against HDFS to see the status of the corrupted files:

hadoop fsck /

Note : Check whether the corrupted files belong to HDFS or HBase; from the path you can tell whether it is an HDFS job file or an HBase block. See the samples below:

1.  /user/hdfs/.staging/job_201307121242_42849/job.split: MISSING 1 blocks of total size 81

The corruption above is in HDFS and occurred while running job job_201307121242_42849 (2013-07-12 @ 12:42).

2. /hbase/.corrupt/ednwavlhd01%3A60020.1349814163628: CORRUPT block blk_-7209961989095415639

/hbase/.corrupt/ednwavlhd01%3A60020.1349814163628:  Under replicated blk_-7209961989095415639_403425. Target Replicas is 3 but found 1 replica(s).

The blocks above were corrupted in HBase during replication. The epoch time appears in the name before ":CORRUPT" (1349814163628); convert it to a date to see when the data was corrupted, find out which table these blocks belong to, and delete them if they are not required.

To Delete the corrupted block run the below command

hadoop fsck -delete /

To move the corrupted block run the below command 

hadoop fsck -move /

The -move option moves the corrupted files under /lost+found in HDFS. You may delete them once the move command has placed the files in this directory.

Wednesday, August 28, 2013

Deleting Hosts

You can stop Cloudera Manager from managing a host and the Hadoop daemons on the host.
First, make sure there are no roles running on the Host; you can decommission the host to ensure all roles are stopped.
Second, you must stop the Cloudera Manager Agent from running on the host; if you don't stop the Agent, it will send heartbeats to the Cloudera Manager Server and show up again in the list of hosts.
To delete a host:
  1. Decommission the host to ensure all roles on the host have been stopped. For instruction, see Decommissioning a Host.
  2. Stop the Agent on the host. For instructions, see Stopping and Restarting Cloudera Manager Agents.
  3. In the Cloudera Manager Admin Console, click the Hosts tab.
  4. Select the host in the Hosts tab.
  5. From the Actions for Selected menu, select Delete.

Reference : http://www.cloudera.com/content/cloudera-content/cloudera-docs/CM4Free/4.5.1/Cloudera-Manager-Free-Edition-User-Guide/cmfeug_topic_6_7.html

Note : At least one HBase Master should be running during decommissioning so that the region data is moved to another RegionServer. So, make sure HBase and the other services are running; Cloudera Manager will take care of adding the host to the exclude list safely.

Also make sure the Cloudera Manager Agent on the machine is stopped, otherwise the host will automatically reappear in Cloudera Manager.


Saturday, June 22, 2013

GitHub - Source Code Control

Why GitHub ? 

GitHub is a hosting service for Git, a version control system comparable to SVN (Subversion) or CVS, that lets you maintain your code through a web UI. It is very popular for keeping code and offers two kinds of repositories: public and private. Public projects can be viewed by anyone, and you will find most open-source code published on GitHub. You can integrate GitHub easily with standard IDEs (Eclipse, NetBeans, IntelliJ, ...).

The latest Eclipse release (Juno) ships with a built-in Git plugin (EGit). You can check out your source and check in changes easily.

GitHub in Command Line In Interface on Ubuntu
Install Git in Ubuntu
$ sudo apt-get install git
How to Checkout from Git?
$ git clone https://github.com/<username>/<repository>.git

How to check the status of your working directory?
$ cd ~/<projectName>
$ git status
Pull your changes from GitHub
$ cd ~/<projectName>
$ git pull origin master
Push your changes into GitHub
$ git add <fileName>
$ git commit -m "Adding <fileName> into Git"
$ git push -u origin master
Add your existing project into Git
Login to your gitHub in web UI
Create a Repository in your github UI 

Push project from Terminal into your created Git Repository
$ cd ~/<projectName>
$ git init
$ git add .
$ git commit -m "Adding projects into git"
$ git remote add origin https://github.com/username/projectName.git
$ git pull origin master
$ git push -u origin master

Check out GitHub project repository into Eclipse
  • Open Eclipse with workspace
  • Go to Window > Open Perspective > Other > Git > Ok
  • Right click and select 'Paste Repository Path or URI'
  • Copy and paste the Git URI from Git Web UI > HTTPS Clone URL
  • Move on with Next and finish to checkout repository into your workspace
  • Select Git Repository > Right Click 'Import Project' > Select your project into working directory > Finish
Convert a regular checked-out project into a Java project in Eclipse
  • Select your project > right click > Properties > Project Facets > Click 'Convert to faceted form..'
  • Select 'Java' and 'Version' change in dropdown if you have multiple version
  • Click 'Apply' > Ok
Create sourceFolder (src/main/java) for your project in Eclipse
  • Select your project > right Click > Source Folder > FolderName 'src/main/java' > Finish 
  • Select your project > right Click > Source Folder > FolderName 'src/main/test' > Finish 
Push your changes into the Git repository through Eclipse
  • Open Eclipse with the workspace that contains the project checked out from Git.
  • Select your project > right click > Team > Add To Index (this must be done first)
  • Select your project > right click > Team > Commit > provide a commit message > Commit and Push > OK
Pull changes into your workspace from the Git repository through Eclipse
  • Open Eclipse with the workspace that contains the project checked out from Git.
  • Select your project > right click > Team > Pull
Note : You can synchronize with the repository through Team > Synchronize Workspace to review the pull or push changes before you apply them.
Try : Familiarize yourself with Git in 5 minutes.

Friday, May 24, 2013

How to get the remote IP address when data is posted through an LB (load balancer)

In Tomcat, update conf/server.xml with the access log valve below:

<Valve className="org.apache.catalina.valves.AccessLogValve"
       prefix="localhost_access_log." suffix=".txt"
       pattern="%{X-Forwarded-For}i %h %l %u %t &quot;%r&quot; %s %b"
       resolveHosts="false" />
Java Code

The following Java code extracts the originating IP address of an HttpServletRequest object.

public final class HTTPUtils {

    private static final String HEADER_X_FORWARDED_FOR =
        "X-FORWARDED-FOR";

    public static String remoteAddr(HttpServletRequest request) {
        String remoteAddr = request.getRemoteAddr();
        String x;
        if ((x = request.getHeader(HEADER_X_FORWARDED_FOR)) != null) {
            remoteAddr = x;
            int idx = remoteAddr.indexOf(',');
            if (idx > -1) {
                remoteAddr = remoteAddr.substring(0, idx);
            }
        }
        return remoteAddr;
    }

}
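A minimal servlet sketch showing how the helper above might be called (the servlet itself is my own illustration, not part of the original post):

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class ClientIpServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        // getRemoteAddr() alone would return the load balancer's IP;
        // the helper prefers the X-Forwarded-For header when it is present.
        String clientIp = HTTPUtils.remoteAddr(request);
        response.setContentType("text/plain");
        response.getWriter().println("Client IP: " + clientIp);
    }
}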

Tuesday, April 23, 2013

Squirrel - Java SQL Client

Why Squirrel ?

SQuirreL SQL is an open-source Java SQL client program for any JDBC-compliant database. It is easily configurable by loading JDBC driver jars and can connect to various SQL servers (MSSQL, MySQL, Sybase, ...).

You can learn more about Squirrel.

Explore Squirrel

1) Download squirrel-sql-3.4.0-install.jar

2) Run / Execute the jar from console : java -jar squirrel-sql-3.4.0-install.jar

3) SQuirreL will be installed on your desktop under your $HOME directory. An uninstall script is available in the same directory if you ever want to remove it.

4) Open the SQuirreL client, go to Drivers, select the driver, click Modify, and load the jars (for MySQL or MSSQL) under the extra class path.

For MySQL :
mysql-connector.jar - class name : com.mysql.jdbc.Driver

For SQL Server 
sqljdbc4.jar - class name : com.microsoft.sqlserver.jdbc.SQLServerDriver

5) Go to Aliases and add a new alias, choosing your driver. You will be connected to the server and can enjoy playing with your SQuirreL client, connecting to different database servers.

Friday, March 15, 2013

How to change your browser User Agent ?

User Agent Switcher for Chrome

What's user agent ?

Your browser sends its user agent to every website you connect to, and some websites serve different content based on the browser and platform. The user agent information is always sent to web servers.

Thursday, March 14, 2013

Cloudera Manager (SCM - Service and Configuration Management for BigData)


How to change the ips in cloudera manager

1) stop all the cloudera manager agents in all the clusters
2) stop the cloudera manager server and server db service in the scm server (namenode server)
3) Modify /etc/cloudera-scm-agent/config.ini to point to the new scm server ip (namenode server ip)
4) In /etc/hosts file add your new ip address and hostnames
5) Start the services one by one (server-db, server, then the agents) and you will see the IPs updated under Hosts in Cloudera Manager.

Adding new role instances in Cloudera Manager, or adding a new host to your existing cluster:

https://ccp.cloudera.com/display/ENT/Adding+Role+Instances

Monday, March 4, 2013

Tools to be installed in windows

MobaXterm - MobaXterm is an excellent tool for connecting to Linux over SSH.

Wednesday, February 20, 2013

Web Services

Web services are client and server applications that communicate over the World Wide Web’s (WWW) HyperText Transfer Protocol (HTTP)

Web services can be combined in a loosely coupled way to achieve complex operations. Programs providing simple services can interact with each other to deliver sophisticated added-value services.

Types of Web Services
"Big" Web Services
"RESTful" Web services

Tuesday, February 19, 2013

Couple of design patterns or architectural pattern most people know already


“Design patterns are recurring solutions to design problems.”

Patterns: According to commonly known practice (the classic "Gang of Four" catalog), there are 23 design patterns in Java. These patterns are grouped under three headings:
1. Creational Patterns
2. Structural Patterns
3. Behavioral Patterns



Intercepting Filter : Facilitates preprocessing and post-processing of a request.

View Helper : Encapsulates logic that is not related to presentation formatting into Helper components. 

Business Delegate : Reduces coupling between presentation-tier clients and business services. It hides the underlying implementation details of the business service, such as lookup and access details of the EJB architecture.

Data Access Object : Abstracts and encapsulate all access to the data source. The DAO manages the connection with the data source to obtain and store data.

Factory Pattern : The factory method pattern is an object-oriented creational design pattern that implements the concept of factories and deals with the problem of creating objects (products) without specifying the exact class of object that will be created; instead, subclasses (via inheritance) decide which concrete class to instantiate. A small sketch follows the list below.

The factory pattern can be used when:
  • The creation of an object precludes its reuse without significant duplication of code.
  • The creation of an object requires access to information or resources that should not be contained within the composing class.
  • The lifetime management of the generated objects must be centralized to ensure a consistent behavior within the application.
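Here is the small factory sketch referenced above; the Shape hierarchy is a generic textbook illustration, not something from the referenced article:

// Callers ask the factory for a product by name and never reference
// the concrete classes directly.
interface Shape {
    double area();
}

class Circle implements Shape {
    private final double radius;
    Circle(double radius) { this.radius = radius; }
    public double area() { return Math.PI * radius * radius; }
}

class Square implements Shape {
    private final double side;
    Square(double side) { this.side = side; }
    public double area() { return side * side; }
}

class ShapeFactory {
    static Shape create(String type, double size) {
        if ("circle".equalsIgnoreCase(type)) {
            return new Circle(size);
        } else if ("square".equalsIgnoreCase(type)) {
            return new Square(size);
        }
        throw new IllegalArgumentException("Unknown shape: " + type);
    }
}

public class FactoryDemo {
    public static void main(String[] args) {
        Shape shape = ShapeFactory.create("circle", 2.0);
        System.out.println("Area: " + shape.area()); // area computed without new Circle(...) at the call site
    }
}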


Model–view–controller (MVC) is a software architecture pattern that separates the representation of information from the user's interaction with it.[1][2] The model consists of application data, business rules, logic, and functions. A view can be any output representation of data, such as a chart or a diagram. Multiple views of the same data are possible, such as a pie chart for management and a tabular view for accountants. The controller mediates input, converting it to commands for the model or view.[3] The central ideas behind MVC are code reusability and separation of concerns.[4]

Reference : http://www.allappforum.com/j2ee_design_patterns/j2ee_design_patterns.htm

Thursday, February 7, 2013

About Ganglia and steps to install in CentOS


Ganglia is a scalable distributed system monitor tool for high-performance computing systems such as clusters and grids. It allows the user to remotely view live or historical statistics (such as CPU load averages or network utilization) for all machines that are being monitored.[1]

It has 2 operations

Ganglia Meta Daemon (gmetad)
The meta node: one machine that receives all measurements and presents it to a client through a website.

Ganglia Monitoring Daemon (gmond)
The monitoring nodes: machines that run only the monitoring daemon and send the measurements to the meta node

Installation below will be in metanode and monitoring daemon: 

CentOS 6 and above :
1. Download : Add the EPEL (Extra Packages for Enterprise Linux) repository to your system by downloading epel-release-6-8.noarch.rpm.

2. Install rpm  

$ rpm -ivh epel-release-6-8.noarch.rpm
Note : Once it is installed, you can see the EPEL repo under /etc/yum.repos.d/. This repo contains the Ganglia packages to install.

MetaNode Machine (Ganglia Server)
Before you proceed, Make sure you downloaded and installed RPM following above mentioned steps.
$ yum install ganglia ganglia-gmetad ganglia-web ganglia-gmond
Note : ganglia-web provides the web front-end; Ganglia runs in the Apache web server with a PHP front-end. Measurements are transferred as XML over UDP (User Datagram Protocol).

Update Ganglia Client / Server configuration
$ vi /etc/ganglia/gmond.conf
cluster {
name = "my servers"
owner = "unspecified"
latlong = "unspecified"
url = "unspecified"
}

udp_send_channel {
host = collector.mycompany.com
port = 8649
}

udp_recv_channel {
port = 8649
}

tcp_accept_channel {
port = 8649
}

Note : This allows collector.mycompany.com to receive monitoring data from every node on port 8649 (UDP). The cluster name and the data source name in gmetad.conf should be the same. Remove mcast_join and use host instead in gmond.conf.

Update Apache Configuration - ServerName
Uncomment serverName in apache configuration and update.
$ vi /etc/httpd/conf/httpd.conf
ServerName example.com:80
Update Ganglia Server Configuration
Change ganglia Configuration in location tag from 'All' to 'Allow'
$ vi /etc/httpd/conf.d/ganglia.conf
Command to Start Ganglia & Gmond & apache Service in Ganglia Server
$ /etc/init.d/gmond start
$ /etc/init.d/gmetad start
$ /etc/init.d/httpd start

Steps to Install Ganglia Client Services in Clusters (Ganglia Client Nodes)
Before you proceed, Make sure you downloaded and installed RPM following above mentioned steps.
$ yum install ganglia ganglia-gmond
Update Ganglia Client (Gmond) Configuration
$ vi /etc/ganglia/gmond.conf 
cluster {
name = "PRODUCTION"
owner = "unspecified"
latlong = "unspecified"
url = "unspecified"
}

udp_send_channel {
host = collector.mycompany.com
port = 8649
}

/*udp_recv_channel {
}
tcp_accept_channel {
}*/
Note : This tells gmond to send its data to the Ganglia server (master) host (collector.mycompany.com) on port 8649 (UDP). The cluster name should match the hostname or data source name of the Ganglia server (master). Remove udp_recv_channel and tcp_accept_channel from the client's gmond configuration. Also, remove mcast_join and use host instead in gmond.conf.
Restart gmond service in ganglia client
$ service gmond restart
Access Ganglia Server through your browser using URL as configured: collector.mycompany.com/ganglia

Tuesday, January 22, 2013

Download Images and Videos from phone

Samsung Phones :

Download samsung kies software and install. You can import / export / back-up music, images, videos, contacts...

http://www.samsung.com/us/kies/

Export contacts with file extension .csv to open the contacts in excel

Nokia Phone

Download Nokia PC Suite software and install. You can import / export / back-up music, images, videos contacts...

http://www.nokia.com/global/support/nokia-pc-suite/

Under File -> Store images, you can save images and videos to your desired location.
Nokia Video Manager is also available to download only videos and import them.


Thursday, January 10, 2013

eclipse tomcat cannot enter server name

Deleting the 2 files below worked perfectly. The paths are relative to your workspace directory:


./.metadata/.plugins/org.eclipse.core.runtime/.settings/org.eclipse.jst.server.tomcat.core.prefs
./.metadata/.plugins/org.eclipse.core.runtime/.settings/org.eclipse.wst.server.core.prefs

eclipse.ini settings to make it work well. Adjust the values based on your memory; at least 6 GB of RAM on your system is good.

Update your eclipse.ini: Xms/Xmx (minimum to maximum heap size), garbage collector options, etc.
-XX:MaxPermSize=512m -Xms1024m -Xmx2048m
-XX:-UseParallelGC  -XX:+AggressiveOpts  -XX:-UseConcMarkSweepGC  -XX:+UseFastAccessorMethods
For SVN, just install Subclipse.
*) Tomcat opens its ports and lets users reach the WAR applications deployed in it from a web browser based solely on $CATALINA_HOME/conf/server.xml.
*) Ex : If in server.xml you open connector ports 443 and 88, then the WAR applications you placed in webapps can be accessed on both of those ports.
*) The default URL port is 80. You can redirect the default port 80 to 443 if you want users to access the portal over HTTPS.
*) ROOT/index.html - mention the URL name there if you want users to be redirected to a specific URL when they access the web service by the system's IP address; the redirect is added in index.html.

Tuesday, January 8, 2013

Create a ShortCut to Ubuntu Desktop or Unity Dock

1) Install gnome-panel

sudo apt-get install gnome-panel

2) Open the Terminal and run below command to get a create launcher


   gnome-desktop-item-edit --create-new ~/.local/share/applications/

3) Provide the name; in the Command field, browse and select the application executable, e.g., Eclipse (/home/gubs/eclipse/eclipse). Click the icon button and load an icon.

4) Browse folder ~/.local/share/applications

5) Drag and drop the created app into desktop or Unity Dock.