Tuesday 23 December 2014

AIX LPAR missing hdisk after vios reboot SOLVED

If you do routine checkups of your LPARs on IBM pSeries, you probably check the status of your LPAR OS disks and volume groups from time to time.
To check the status of your volume group hdisks, use this:

root@aix-server> [/]  lsvg -p rootvg
rootvg:
PV_NAME           PV STATE          TOTAL PPs   FREE PPs    FREE DISTRIBUTION
hdisk0            missing           546         4           00..00..00..00..04
hdisk1            active            546         0           00..00..00..00..00

 
As you can see, one of the hdisks is missing! And you start to panic: "OMG, an hdisk is missing, where, how, when?!?!"

There is no need to panic. You will see one of your disks as missing after you have restarted one of your VIOS. In our case there are two VIOS: hdisk0 comes from the first VIOS, hdisk1 from the second. These two hdisks make up the volume group called rootvg.

How to fix this missing hdisk state?
All you need to do is activate the volume group.

root@aix-server> [/] varyonvg rootvg

This will activate your volume group rootvg. After this you will see both of your hdisks as active!
Why is this important? Because of this:

When a volume group is activated, physical partitions are synchronized if they are not current.
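To confirm that previously stale partitions were synchronized, you can check the volume group for stale PPs (the output here is illustrative):

root@aix-server> [/] lsvg rootvg | grep -i stale
STALE PVs:          0                        STALE PPs:      0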

But there is one case where you can't make your hdisk active without additional changes! In that case, after you execute the varyonvg command, an error is printed and you won't be able to make your hdisk active:

root@aix-server> [/] varyonvg rootvg
varyonvg: Cannot varyon volume group with an active dump device on a missing physical volume. Use sysdumpdev to temporarily replace the dump device with /dev/sysdumpnull and try again.

So, as the error says, the active dump device is on the missing physical volume hdisk0. (I will not explain here what a system dump device is.) How to change this? First we will list the status of the sysdump devices.

root@aix-server> [/]  sysdumpdev -l
primary              /dev/lg_dumplv
secondary            /dev/sysdumpnull
copy directory       /var/adm/ras
forced copy flag     TRUE
always allow dump    FALSE
dump compression     ON

From here we can see that the primary dump device is /dev/lg_dumplv and the secondary is /dev/sysdumpnull. The "active dump device" from the error message is the primary dump device in sysdumpdev -l, so that is what we need to change.

root@aix-server> [/] sysdumpdev -p /dev/sysdumpnull

List the sysdump devices again.

root@aix-server> [/]  sysdumpdev -l
primary              /dev/sysdumpnull
secondary            /dev/sysdumpnull
copy directory       /var/adm/ras
forced copy flag     TRUE
always allow dump    FALSE
dump compression     ON


Now activate the volume group again.


root@aix-server> [/] varyonvg rootvg

root@aix-server> [/] 
root@aix-server> [/]  lsvg -p rootvg
rootvg:
PV_NAME           PV STATE          TOTAL PPs   FREE PPs    FREE DISTRIBUTION
hdisk0            active            546         4           00..00..00..00..04
hdisk1            active            546         0           00..00..00..00..00


As you can see, both hdisks are active now.
Now change your primary dump device back:

root@aix-server> [/] sysdumpdev -p /dev/lg_dumplv

Thursday 11 December 2014

RH6 and pdksh issue - SOLVED!

In case you need to install the pdksh package on your Red Hat 6, you will see something like this:

# yum install pdksh
Loaded plugins: product-id, security, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
Setting up Install Process
No package pdksh available.
Error: Nothing to do


Hm... strange, isn't it?

Your issue is this

Not able to install pdksh on Red Hat Enterprise Linux 6

Explanation is really simple

RHEL 6 provides mksh, which is an advanced version of the pdksh package. Install mksh instead of pdksh.

Solving it goes like this:

# yum install mksh

Problem solved!
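If some old script still calls pdksh by name, a quick workaround (my own sketch, not an official Red Hat recommendation) is to point a pdksh name at mksh:

# yum install mksh
# ln -s /bin/mksh /bin/pdksh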

Tuesday 2 December 2014

mysql VALUES LESS THAN value must be strictly increasing for each partition SOLVED!

In case you are doing a reorganization of your mysql table (I talked about it in my previous posts), you may run into this error message:

ERROR 1493 (HY000) at line 1: VALUES LESS THAN value must be strictly increasing for each partition

When I reorganize partitions I like to do it from the OS, something like this:

server# mysql Syslog < reorganize.txt

where reorganize.txt contains mysql commands that look like this:

server# cat reorganize.txt
alter table SystemEvents reorganize partition p2014 into
 ( partition p20141201 values less than (to_days('2014-12-01')),
   partition p20141202 values less than (to_days('2014-12-02')),
   partition p20141203 values less than (to_days('2014-12-03')),
   partition p20141204 values less than (to_days('2014-12-04')),
   partition p20141205 values less than (to_days('2014-12-05')),
   partition p20141206 values less than (to_days('2014-12-06')),
   partition p20141207 values less than (to_days('2014-12-07')),
   partition p20141208 values less than (to_days('2014-12-08')),
   partition p20141209 values less than (to_days('2014-12-09')),
   partition p20141210 values less than (to_days('2014-10-10')),
   partition p20141211 values less than (to_days('2014-12-11')),
   partition p20141212 values less than (to_days('2014-12-12')),
   partition p20141213 values less than (to_days('2014-12-13')),
   partition p20141214 values less than (to_days('2014-12-14')),
   partition p20141215 values less than (to_days('2014-12-15')),
   partition p20141216 values less than (to_days('2014-12-16')),
   partition p20141217 values less than (to_days('2014-12-17')),
   partition p20141218 values less than (to_days('2014-12-18')),
   partition p20141219 values less than (to_days('2014-12-19')),
   partition p20141220 values less than (to_days('2014-12-20')),
   partition p20141221 values less than (to_days('2014-12-21')),
   partition p20141222 values less than (to_days('2014-12-22')),
   partition p20141223 values less than (to_days('2014-12-23')),
   partition p20141224 values less than (to_days('2014-12-24')),
   partition p20141225 values less than (to_days('2014-12-25')),
   partition p20141226 values less than (to_days('2014-12-26')),
   partition p20141227 values less than (to_days('2014-12-27')),
   partition p20141228 values less than (to_days('2014-12-28')),
   partition p20141229 values less than (to_days('2014-12-29')),
   partition p20141230 values less than (to_days('2014-12-30')),
   partition p2014 values less than (MAXVALUE));

It looks good, right? But when I executed it, the error message appeared.

server# mysql Syslog < reorganize.txt
ERROR 1493 (HY000) at line 1: VALUES LESS THAN value must be strictly increasing for each partition

So I checked my reorganize.txt file again. It looked good.
I could not see the error. But there was an error!
server# cat reorganize.txt
alter table SystemEvents reorganize partition p2014 into
 ( partition p20141201 values less than (to_days('2014-12-01')),
   partition p20141202 values less than (to_days('2014-12-02')),
   partition p20141203 values less than (to_days('2014-12-03')),
   partition p20141204 values less than (to_days('2014-12-04')),
   partition p20141205 values less than (to_days('2014-12-05')),
   partition p20141206 values less than (to_days('2014-12-06')),
   partition p20141207 values less than (to_days('2014-12-07')),
   partition p20141208 values less than (to_days('2014-12-08')),
   partition p20141209 values less than (to_days('2014-12-09')),
   partition p20141210 values less than (to_days('2014-10-10')),
   partition p20141211 values less than (to_days('2014-12-11')),
   partition p20141212 values less than (to_days('2014-12-12')),
   partition p20141213 values less than (to_days('2014-12-13')),
   partition p20141214 values less than (to_days('2014-12-14')),
   partition p20141215 values less than (to_days('2014-12-15')),
   partition p20141216 values less than (to_days('2014-12-16')),
   partition p20141217 values less than (to_days('2014-12-17')),
   partition p20141218 values less than (to_days('2014-12-18')),
   partition p20141219 values less than (to_days('2014-12-19')),
   partition p20141220 values less than (to_days('2014-12-20')),
   partition p20141221 values less than (to_days('2014-12-21')),
   partition p20141222 values less than (to_days('2014-12-22')),
   partition p20141223 values less than (to_days('2014-12-23')),
   partition p20141224 values less than (to_days('2014-12-24')),
   partition p20141225 values less than (to_days('2014-12-25')),
   partition p20141226 values less than (to_days('2014-12-26')),
   partition p20141227 values less than (to_days('2014-12-27')),
   partition p20141228 values less than (to_days('2014-12-28')),
   partition p20141229 values less than (to_days('2014-12-29')),
   partition p20141230 values less than (to_days('2014-12-30')),
   partition p2014 values less than (MAXVALUE));

When I changed the value from 2014-10-10 to 2014-12-10 in the p20141210 line (as it should have been in the first place), everything worked perfectly!!!

The problem was, like the error message said, that there was a partition whose to_days value was not increasing compared to the previous partition. If this were allowed, it would be possible to have time-inconsistent values in your partitioned tables.
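By the way, you can check the current partition boundaries directly from information_schema before (or after) running the ALTER; the values in PARTITION_DESCRIPTION must be strictly increasing. A sketch:

mysql> SELECT PARTITION_NAME, PARTITION_DESCRIPTION
    -> FROM information_schema.PARTITIONS
    -> WHERE TABLE_SCHEMA='Syslog' AND TABLE_NAME='SystemEvents'
    -> ORDER BY PARTITION_ORDINAL_POSITION;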

Tuesday 11 November 2014

sftp connection issue SOLVED and EXPLAINED: Permission denied (publickey), Couldn't read packet: Connection reset by peer

You created a new system user for sftp file transfers. You followed the steps for folder and file permissions and everything looks just fine, BUT... when you try to connect through sftp, you get this "Permission denied (publickey), Couldn't read packet: Connection reset by peer" error!

# sftp user1@x.x.x.x
Permission denied (publickey).
Couldn't read packet: Connection reset by peer
You have new mail in /var/mail/root

You check again: is the public key you exchanged OK, is it in the right place, did you name the authorized_keys file correctly, are the file permissions on the sftp folder OK, etc...
But still the same error.
In case you make your sftp connection more verbose:

# sftp -vvv user1@x.x.x.x
.
.
.
Permission denied (publickey).
Couldn't read packet: Connection reset by peer

Nothing there.

SFTP Permission denied (publickey). Couldn't read packet: Connection reset by peer SOLVED!!!

So what is the problem? The problem is that when you create the user, you also need to set a password for that user on the server! A freshly created account usually has a locked password, and sshd refuses logins for locked accounts.

On server:
x.x.x.x#passwd user1

On sftp client
#sftp user1@x.x.x.x
Connected to x.x.x.x
sftp>

As you can see, this is something you can easily overlook, and sftp will not work without this step!
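If you want to verify this before changing anything, you can list the account status on the server (the exact output format varies per distro; the L flag means the password is locked):

x.x.x.x# passwd -S user1
user1 L 10/31/2014 0 99999 7 -1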

Friday 31 October 2014

sftp user action logs - EXPLAINED and SOLVED

I already wrote how to set up and why to use SFTP. Time has passed and someone asks you to see whether some sftp user added/renamed/downloaded/uploaded/deleted some file or folder. You go to /var/log and search for logs about sftp actions. And all that you can find is...

ubuntu sshd[3510]: subsystem request for sftp by user boris
ubuntu sshd[3510]: pam_unix(sshd:session): session closed for user boris

So, as you can see, there is no log of what a user connected via sftp is doing. At some point in time someone will tell you that this is a security / where-are-these-files / who-deleted-the-files issue. So how to do this? It is quite simple if you understand how logs are created, how chroot works and, of course, if you are reading this!

Check ssh version first!

First and most important thing to know: on older versions of openssh there is a very good chance that this will not work, because older versions of sftp-server do not have these options.
I tested this on:
root@ubuntu:/var/log# cat /etc/issue
Ubuntu 12.04.4 LTS \n \l
root@ubuntu:/var/log# uname -a
Linux ubuntu 3.5.0-23-generic #35~precise1-Ubuntu SMP Fri Jan 25 17:15:33 UTC 2013 i686 i686 i386 GNU/Linux
root@ubuntu:/var/log# dpkg -l openssh*
ii  openssh-client 1:5.9p1-5ubunt secure shell (SSH) client, for secure access
ii  openssh-server 1:5.9p1-5ubunt secure shell (SSH) server, for secure access


But on CentOS 5.8 this does not work:
# cat /etc/issue
CentOS release 5.8 (Final)
# rpm -qa |grep ssh
openssh-4.3p2-82.el5
openssh-server-4.3p2-82.el5

A few things to know about logs and chroot

As you know, sftp users (in my case user boris) are all chrooted to the directory configured in /etc/ssh/sshd_config with the ChrootDirectory directive. In case you do not know what this means: once the user is connected, he cannot leave this location. The user can't do anything outside this folder, and read/write operations are limited to his sftp folder. If you are wondering why I wrote three sentences about the user not being able to leave the chroot directory, wait just a bit more. Rsyslog or syslog captures events through the socket /dev/log. This is important because sftp is a feature of ssh, and ssh uses rsyslog to store its logs in /var/log. Permissions on /dev/log are:

# ls -la /dev/log
srw-rw-rw- 1 root root 0 Oct 31 12:54 /dev/log

So anybody who can reach /dev/log can write logs through rsyslog. But can the sftp user reach /dev/log? NO! Why? Because he is captured inside his chrooted directory!!! So the idea is to keep the user chrooted but still let him write to /dev/log.

Configuration in sshd_config

To enable logging for sftp-server, we must first enable it in sshd_config. Change the line

ForceCommand internal-sftp 
to
ForceCommand internal-sftp -l INFO -f AUTH

Option -l (small letter L) sets the log level and option -f sets the syslog facility. Do not give -f a file location; where the log file lands is configured in rsyslog.conf.
After you make the necessary changes, restart the ssh service.

#/etc/init.d/sshd restart
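For reference, the relevant part of sshd_config then looks roughly like this (the Match condition and the group name are assumptions; adjust them to your setup):

Subsystem sftp internal-sftp
Match Group sftponly
    ChrootDirectory /opt/sftp_test/%u
    ForceCommand internal-sftp -l INFO -f AUTH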

Accessing /dev/log from chrooted folder 

How to do this? My sftp folder is defined in sshd_config by the ChrootDirectory /opt/sftp_test/%u directive and my sftp user is boris. Follow these steps!

#cd /opt/sftp_test/boris
#mkdir dev
#touch dev/log

#chmod 511 dev
#chattr +i dev
# mount --bind /dev/log dev/log

And this is it!!!

How to test if this is working?
Go to your AUTH log. Depending on the Linux distro this can be /var/log/secure or /var/log/auth.log.
On this test server I am using Ubuntu 12.04.
#tail -f /var/log/auth.log
Oct 31 15:28:30 ubuntu internal-sftp[2460]: session opened for local user boris from [x.x.x.x]
Oct 31 15:28:31 ubuntu internal-sftp[2460]: opendir "/"
Oct 31 15:28:31 ubuntu internal-sftp[2460]: closedir "/"
Oct 31 15:28:47 ubuntu internal-sftp[2460]: opendir "/boris"
Oct 31 15:28:47 ubuntu internal-sftp[2460]: closedir "/boris"
Oct 31 15:28:53 ubuntu internal-sftp[2460]: mkdir name "/boris/12/123" mode 0777
Oct 31 15:28:58 ubuntu internal-sftp[2460]: opendir "/boris/12/123"
Oct 31 15:28:58 ubuntu internal-sftp[2460]: closedir "/boris/12/123"
Oct 31 15:29:05 ubuntu internal-sftp[2460]: open "/boris/12/123/analy.jpg" flags WRITE,CREATE,TRUNCATE mode 0700
Oct 31 15:29:05 ubuntu internal-sftp[2460]: close "/boris/12/123/analy.jpg" bytes read 0 written 28925
Oct 31 15:29:19 ubuntu internal-sftp[2460]: session closed for local user boris from [x.x.x.x]
Oct 31 15:29:19 ubuntu sshd[2459]: Received disconnect from x.x.x.x: 11: disconnected by user
Oct 31 15:29:19 ubuntu sshd[2287]: pam_unix(sshd:session): session closed for user boris


As you can see, this is a much better log!
If you have more than one user using sftp, you have to do this for every one of them!

In case you want this to keep working after a reboot, read this post: fstab mount bind!


fstab mount bind error - SOLVED and EXPLAINED!

In case you are using the mount --bind option for something (in my case it was a must for sftp user action logs), you want this mount --bind to survive a reboot. So you will put this mount point in /etc/fstab.
First I mounted it "by hand":

server:#mount --bind /dev/log /opt/sftp_test/boris/dev/log

This worked, so now I need to make it mount automatically after reboot.
My fstab entry looked like this:

/dev/log  /opt/sftp_test/boris/dev/log    none    bind

So, I tested if this works!
The "false" test passed. Why do I call it a false test? I unmounted /opt/sftp_test/boris/dev/log and mount -a mounted it back automatically.

server:#umount /opt/sftp_test/boris/dev/log
server:#mount -a
server:#mount
/dev/mapper/ubuntu-root on / type ext4 (rw,errors=remount-ro)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
none on /sys/fs/fuse/connections type fusectl (rw)
none on /sys/kernel/debug type debugfs (rw)
none on /sys/kernel/security type securityfs (rw)
udev on /dev type devtmpfs (rw,mode=0755)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755)
none on /run/lock type tmpfs (rw,noexec,nosuid,nodev,size=5242880)
none on /run/shm type tmpfs (rw,nosuid,nodev)
/dev/sda1 on /boot type ext2 (rw)
rpc_pipefs on /run/rpc_pipefs type rpc_pipefs (rw)
/dev/log on /opt/sftp_test/boris/dev/log type none (rw,bind) 

 

Fstab mount bind error

Ok, now the only true and real test: reboot! As I was waiting for ping to come back so that I could ssh to my test server, I thought nothing could go wrong... Ping started to pass, but I could not connect over ssh. Ok, the ssh service just has not started yet. 10 seconds, 20 seconds, 30 seconds... Ok, now I know that something is wrong! I looked at my VirtualBox server and there it was: a black stale screen.
The error was:
The disk drive for /opt/sftp_test/boris/dev/log is not ready yet or not present.
So I just pressed S and the boot process continued. This is why a reboot is the real test! :)
What can be the reason for this? I suspect that mounting partitions from fstab is one of the first things the OS does during the boot process, and at that point the /dev/log location does not exist yet. The error message "The disk drive for..." is pretty clear.
How to resolve this? Insert the following line in /etc/rc.local:

mount --bind /dev/log /opt/sftp_test/boris/dev/log
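Keep in mind that on Ubuntu /etc/rc.local ends with exit 0, so the mount line must go above it. A minimal sketch of the whole file:

#!/bin/sh -e
# bind /dev/log into the sftp chroot so internal-sftp can log after reboot
mount --bind /dev/log /opt/sftp_test/boris/dev/log
exit 0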

Reboot!!!
root@ubuntu:/opt/sftp_test/boris/dev# reboot
root@ubuntu:/opt/sftp_test/boris/dev#
Broadcast message from root@ubuntu
        (/dev/pts/0) at 12:53 ...

The system is going down for reboot NOW!
login as: root
root@x.x.x.x's password:
Welcome to Ubuntu 12.04.4 LTS (GNU/Linux 3.5.0-23-generic i686)
Last login: Fri Oct 31 12:47:43 2014 from x.x.x.x
root@ubuntu:~# mount
/dev/mapper/ubuntu-root on / type ext4 (rw,errors=remount-ro)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
none on /sys/fs/fuse/connections type fusectl (rw)
none on /sys/kernel/debug type debugfs (rw)
none on /sys/kernel/security type securityfs (rw)
udev on /dev type devtmpfs (rw,mode=0755)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755)
none on /run/lock type tmpfs (rw,noexec,nosuid,nodev,size=5242880)
none on /run/shm type tmpfs (rw,nosuid,nodev)
/dev/sda1 on /boot type ext2 (rw)
rpc_pipefs on /run/rpc_pipefs type rpc_pipefs (rw)
/dev/log on /opt/sftp_test/boris/dev/log type none (rw,bind)

Problem solved!

Wednesday 15 October 2014

Huawei U8650 sim unlock and cyanogenmod issue

A few days ago I wanted to insert another operator's SIM card into my old HUAWEI U8650. I already wrote about rooting this phone and installing a custom ROM; I installed Cyanogenmod 7.2! I have to say that this is one of the best custom ROMs out there for this phone and generally for older Android phones.

SIM unlock - what does this mean?


When you buy a phone from a telecom operator, the SIM slot is locked to work only with that operator's SIM cards. This way the operator protects its investment, because when you sign a contract for some time period that includes a phone, you usually get the phone very cheap or for free. If you unlock the SIM by yourself you void the warranty; if you go to the telecom operator, they will tell you that they cannot SIM-unlock your phone until the contract expires! When you insert a SIM card from another telecom operator, the phone will report that no SIM is inserted and then ask for a NETWORK UNLOCK CODE, which you will be prompted to enter. By entering the SIM unlock code you unlock your phone for SIM cards from other telecom operators; until you do, you will not be able to use them.

How to get SIM unlock code?


Depending on the policies of your telecom operator, you can get the SIM unlock code from your operator, for free or not, when the contract expires. If you lost your contract papers, you can buy a SIM unlock code on eBay. The SIM unlock code is generated from the phone IMEI and the telecom operator's "key". How exactly this is done I do not know, but those two pieces of information (besides money) are the only things you need to get a SIM unlock code! It takes a few days to receive the code.


Cyanogenmod SIM unlock issue


In case you have already installed a Cyanogenmod ROM, there is a problem.
This is from one forum:
"I had this problem. Cyanogenmod does not prompt you for an unlock code when putting in a different sim.

You have to flash back to a stock rom, and then put in your unlock code. Then flash back to Cyanogenmod.
"

What to do?


1. Find a stock ROM for your phone! For the Huawei U8650 the stock ROM is here!
2. http://sysadmin-tricks.blogspot.com/2013/12/huawei-u8650-rooting-and-downgrade-from.html
    (the part about rooting and downgrade)
3. This will install the stock ROM. When you insert a SIM card from another telecom operator and turn on the phone, you will be prompted with
NETWORK UNLOCK CODE
Enter the code.
You will be notified that the network is unlocked!

That is it! Your phone is now SIM unlocked!

The only bad thing about installing the stock ROM is that it wipes out CWM recovery, so if you want to install some custom ROM again, you first have to reinstall CWM!

Tuesday 30 September 2014

EMC Networker skip folder and files settings on Linux - EXPLAINED and SOLVED

How do you skip certain folders or files from being backed up on Linux, in case you use EMC Networker as your backup software? You have to write a simple text file located in the / folder.
The file name has to be .nsr and the syntax has to be like this:

server#cat .nsr
<< /sys >>
+skip: *


This will skip all files and folders under /sys.
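The same mechanism works for more specific patterns. For example, a directive like this (the path and pattern are only an illustration) would skip just the temporary files under an application log folder:

server#cat .nsr
<< /opt/app/logs >>
+skip: *.tmp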

Friday 26 September 2014

How to search for pattern in blogger template xml file - quick way!

Ok, you are using Blogger. And you love it! It is free, meaning that web hosting is free and the domain name is free. It is pretty simple to change the template or to add something. But... the first time you have to change something that is not predefined by the Google Blogger team, you will have to start using the HTML editor that is embedded in Blogger. The changes you need to make depend on the template you use. If you are not an HTML expert, modification of the template file goes something like this:
step 1: find on the Internet how to change what you need
step 2: make the necessary changes in the HTML code

Instructions on the Internet go like this: "Find this pattern and insert this after that pattern, or delete that." But finding that pattern can be tricky: search in the HTML editor gave me trouble because it did not work on the whole xml file, only on the part of the file visible in the editor window. So finding a pattern goes something like this: search pattern, scroll down, search pattern, scroll down, etc. For instance, the .xml file for the template used on this blog has 2222 lines, and in the editor window I can see only 23 lines.
So how to quickly find the pattern you are looking for? I use this method. Back up your template .xml file; this downloads the xml file from Blogger to your computer. Then use any text editor to find which line your pattern is on. I use Cygwin. Here is an example. My pattern is <data:post.body/>. It looks like this:

$ grep -n '<data:post.body' template-3270296475170057061\(1\).xml
1772:            <data:post.body/>
1899:<div expr:id='&quot;summary&quot; + data:post.id'><data:post.body/></div>
1903:<b:if cond='data:blog.pageType == &quot;item&quot;'><data:post.body/></b:if>
1905:<b:if cond='data:blog.pageType == &quot;static_page&quot;'><data:post.body/></b:if>


So in the Blogger HTML editor I go to line 1772 and there insert or delete the code that I need.



Friday 19 September 2014

HP Proliant Server power supply mismatch issue

In case you insert by mistake a power supply that is not meant for your HP ProLiant server, you will see a "Power Supply mismatch" message. In case you are wondering "what do you mean, not for my server? That power supply fits inside my server!": yes, it can fit, but its output power is not appropriate for your server. From my experience there are 750W and 1100W power supplies. If you mix these two, everything will work fine until you need to reboot your system! Then, during the hardware check, this error message will appear:

 "Power Supply mismatch"

After this the server halts and there is nothing you can do. How to start your server then? Well, if you have two power supplies (one original from the server and another from some other server that is not appropriate), take out the power supply that is not from your server. After you do this, your server will pass the hardware check and your system will boot, but your power supply will not be redundant!
 

How to add meta description to blogger blog

By default, there is no meta description option when you create your blog in Blogger. In fact, there is no option to enter a meta description anywhere. What does this mean? It means that when you search for your blog in a search engine, the result you see below the URL shows the blog name, description or last post. For this blog it looks like this: blog name, part of the blog description and a small part of the last post. Why is this bad? Because metadata is VERY, VERY important for search engines! When a search engine searches for the phrase you typed in, its algorithm first compares the url, metadata, etc. So if you do not have metadata on your blog and somebody else does, and you both have the same phrase on your blog/website, his will be shown first. And that is extremely important because, let's be realistic, how many times do you search for something and go to the second or third page of Google search results?

An important thing to know about Blogger: the metadata description is not the same as the blog description. The blog description field is shown under the blog name, but as far as the search engine algorithm is concerned, that field is not important.
Ok, how to enter a metadata description on a Blogger blog? You have to do it manually by inserting HTML code in your template! How to do this? Go to Template / Edit template; the HTML editor will appear. Find the <head> </head> block. Anywhere in this block insert the following line:

 <meta content='DESCRIPTION' name='description'/>
 
And that is that. It takes time for these changes to take effect; the main reason for this is the update cycle of the search engines.
 
Monday 15 September 2014

External backlinks using forum signature - EXPLAINED!

If you want to earn money from ads on your web site, you have to have good traffic! To have good traffic, you have to have good SEO, external backlinks and internal backlinks, so that search engines show your site on the first or second page. The higher your rank is, the higher the chance that someone visits your site, blog or tube channel!
One of the best ways to build external backlinks is using forums that allow a signature that can be a link to your web page! What does this mean? It means that every time you comment on something on the forum, below your comment there will be a link to your web page!
How to insert that signature? It all depends on the software used for the forum. The most common way is by inserting BBcode in the signature area.
Go to your profile settings on the forum and insert this code in the signature area:

[url]http://yourwebsite.com[/url]

Click SAVE and that is it!
How will this look? The body of your comment will look like this:

comment
--------------------
http://yourwebsite.com

You can alter your signature a bit and use this code instead:
[url=http://yourwebsite.com]mywebsite[/url]

In this case your comment body will look like this

comment
--------------------
mywebsite

These days backlinks from forums are not worth much, but any backlink is better than no backlink!

Wednesday 27 August 2014

ssh login to another server with no password using rsa keys

After you have created an rsa key, it is time to exchange it with the server that you want to connect to without a password.

Use this command

root@ubuntus1:~/.ssh# ssh-copy-id -i id_rsa.pub root@192.168.1.2
The authenticity of host '192.168.1.2 (192.168.1.2)' can't be established.
ECDSA key fingerprint is 48:ea:a4:f7:12:15:ca:f0:53:c6:66:44:e8:6b:30:8f.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@192.168.1.2's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'root@192.168.1.2'"
and check to make sure that only the key(s) you wanted were added.


Try to connect!

root@ubuntus1:~/.ssh# ssh 192.168.1.2
Welcome to Ubuntu 12.04.4 LTS (GNU/Linux 3.5.0-23-generic i686)

 * Documentation:  https://help.ubuntu.com/

  System information as of Wed Aug 27 08:46:14 CEST 2014

  System load:  0.13              Processes:           80
  Usage of /:   65.3% of 2.71GB   Users logged in:     1
  Memory usage: 47%               IP address for eth1: 192.168.1.2
  Swap usage:   1%

  Graph this data and manage this system at:
    https://landscape.canonical.com/

58 packages can be updated.
44 updates are security updates.

New release '14.04.1 LTS' available.
Run 'do-release-upgrade' to upgrade to it.

For more information, please see:
http://wiki.ubuntu.com/1204_HWE_EOL

To upgrade to a supported (or longer supported) configuration:

* Upgrade from Ubuntu 12.04 LTS to Ubuntu 14.04 LTS by running:
sudo do-release-upgrade

OR

* Install a newer HWE version by running:
sudo apt-get install linux-generic-lts-trusty linux-image-generic-lts-trusty

and reboot your system.

Last login: Tue Aug 26 14:50:48 2014 from 192.168.1.5
root@ubuntu:~# exit
logout
Connection to 192.168.1.2 closed.
root@ubuntus1:~/.ssh#


As you can see, this is very simple and easy.

ssh keygen create rsa key

In case you want to create rsa ssh keys, use the following command:

root@ubuntus1:~/.ssh# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
96:03:9f:05:26:c5:e6:72:30:fc:85:61:41:8a:c8:34 root@ubuntus1
The key's randomart image is:
+--[ RSA 2048]----+
|  E  ..+O+       |
| o o .+=+..      |
|  o . o* ..      |
|      .o++       |
|       oS        |
|       . .       |
|                 |
|                 |
|                 |
+-----------------+

root@ubuntus1:~/.ssh# ls
id_rsa  id_rsa.pub
 

The key has two parts: public (id_rsa.pub) and private (id_rsa). You create rsa ssh keys when you want to exchange keys with other servers so that you can log in without a password. You only ever exchange the public part of the key.

Thursday 14 August 2014

change alf_data location Alfresco - SOLVED

By default the alf_data location is inside the alfresco_home folder. In case you want to change the alf_data location, you have to find it first.
For this change there are a few things you need to know: in Alfresco there are a few places where you can change things, and in some cases the official documentation is just not good enough.
So, how to change the alf_data location? The directive you need to find is called dir.root.
It is located in alfresco_home/tomcat/shared/classes/alfresco-global.properties.
By default dir.root is


dir.root=/opt/Alfresco/alf_data 

To change the location simply do this:
#dir.root=/opt/Alfresco/alf_data
dir.root=new_location/alf_data

For this to work, the alf_data folder has to be owned by the alfresco user and the alfresco group.

server#cd new_location
server#chown -R alfresco:alfresco alf_data

Restart the application after you are finished. In case you did something wrong, the Alfresco application will not start; in that case, check alfresco.log located in alfresco_home.


Alfresco memory settings

Depending on the Alfresco version, the java memory settings for Alfresco are located in different files and locations.

For older Alfresco versions these settings are located in the Alfresco home folder, in the file alfresco_home/alfresco.sh. The line looks something like this:

export JAVA_OPTS='-Xms256m -Xmx1024m -XX:MaxPermSize=512m -server'

So every time you start Alfresco, these java memory settings will be applied!

In newer versions of Alfresco the java memory settings are in alfresco_home/tomcat/scripts/ctl.sh, in the line

 export JAVA_OPTS="-XX:MaxPermSize=512m -Xms256m -Xmx1024m ..

Each memory change demands a restart of the Alfresco application.
In case you can't find the memory settings, the easiest way to locate them is:

server#cd alfresco_home
server#grep -r 'PermSize' *
alfresco.sh:export JAVA_OPTS='-Xms256m -Xmx1024m -XX:MaxPermSize=512m -server'



Oracle Glassfish memory settings

In case you want to change the default Oracle Glassfish java memory settings, do the following. Find the domain.xml file in your glassfish domain folder. The location of domain.xml is something like glassfish_home/glassfish/domains/domain1/config/. In case you can't find domain.xml, use the find command to locate it:

server#cd glassfish_home
server#find . -name domain.xml
./glassfish/domains/domain1/config/domain.xml

In this file find this section:

<jvm-options>-XX:PermSize=256m</jvm-options>
        <jvm-options>-Xmx1024m</jvm-options>
        <jvm-options>-Dgosh.args=--nointeractive</jvm-options>
        <jvm-options>-Djavax.management.builder.initial=com.sun.enterprise.v3.admin.AppServerMBeanServerBuilder</jvm-options>
        <jvm-options>-Dcom.sun.enterprise.security.httpsOutboundKeyAlias=s1as</jvm-options>
        <jvm-options>-XX:MaxPermSize=512m</jvm-options>

Memory settings are changed in these lines:

<jvm-options>-XX:PermSize=256m</jvm-options>
<jvm-options>-Xmx1024m</jvm-options>
<jvm-options>-XX:MaxPermSize=512m</jvm-options>
   

We won't be talking about what these parameters mean.
After you change these settings, you have to restart your Oracle Glassfish domain:
 server#./asadmin stop-domain domain1
 server#./asadmin start-domain domain1



Tuesday 12 August 2014

How to monitor Glassfish memory usage on zabbix - SOLVED!

In case you need monitoring of your Glassfish application server's java memory consumption, you can use the asadmin get directive. With it you can get all necessary information about Glassfish resources. How to get only the java memory information?

server#./asadmin get -m server.jvm.memory.*heap*
server.jvm.memory.committedheapsize-count-count = 2428174336
server.jvm.memory.committedheapsize-count-description = Amount of memory in bytes that is committed for the Java virtual machine to use
server.jvm.memory.committedheapsize-count-lastsampletime = 1407835165699
.
.
.

This gives you all the memory information you could need, but it is maybe too much. Usually all you want is committed heap size, used heap size and max heap size.

server#./asadmin get -m server.jvm.memory.*heap* |grep count-count
server.jvm.memory.committedheapsize-count-count = 2428174336
server.jvm.memory.committednonheapsize-count-count = 550043648
server.jvm.memory.initheapsize-count-count = 62795648
server.jvm.memory.initnonheapsize-count-count = 539426816
server.jvm.memory.maxheapsize-count-count = 2863333376
server.jvm.memory.maxnonheapsize-count-count = 1124073472
server.jvm.memory.usedheapsize-count-count = 200077448
server.jvm.memory.usednonheapsize-count-count = 121870448


From here you can see all the information that you need!
Grep out the values you care about and add them to the zabbix agent and the zabbix server.
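For example, a zabbix_agentd.conf UserParameter for the used heap size could look like this (a sketch; the asadmin path is an assumption, adjust it to your installation):

UserParameter=glassfish.heap.used,/opt/glassfish/bin/asadmin get -m server.jvm.memory.usedheapsize-count-count | awk '/count-count/ {print $3}'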

Bash History: Display Date And Time For Each Command - Solved

Multiple users are using the same OS account and you want to know who did what? Or you know that you did something on a certain day but you can't remember what? Or you just want to know what you did on some specific day? The Linux command that shows a user's previously typed commands is history. But depending on the Linux distro, history may or may not be useful for this kind of "when was this command executed" investigation.

In most Linux distros, date and time are not shown in history. From my experience, only SUSE SLES 11 has this enabled by default.

So let's see what the "problem" is!

server#history
.
.
596  ls -la
597  cd ..
598  ls
599  yum info openssh
600  ifconfig
601  history
602  cat /etc/issue
603  history

So how to add time and date to history?
Because most users use the bash shell, you should add the following line

export HISTTIMEFORMAT="%d.%m.%y. %T " 

in /etc/bashrc for Red Hat, Fedora and CentOS distros, or in /etc/bash.bashrc for SUSE and Ubuntu distros. This adds the environment variable HISTTIMEFORMAT, which in turn records date and time for every command typed in the bash shell. In case a user uses some other shell like zsh or ksh, add the same line to zshrc or kshrc. As for the date and time format, use the format you think works best for you; find the right format by playing with the date command. For example, the time format that I use is:

server# date "+%d.%m.%y. %T"
11.08.14. 21:25:08

I hope you get the point.

There is no need to restart any service or anything like that. The next time you log in, history will be recorded with date and time and will look something like this:

server#history
 1018  11.08.14. 18:18:34 ls
 1019  11.08.14. 18:18:37 du . -h
 1020  11.08.14. 18:18:40 ls -lh
 1021  11.08.14. 18:18:52 history

Sunday 10 August 2014

memory leak Java java.lang.OutOfMemoryError: Java heap space alarm on zabbix

What to do when your java virtual machine is out of memory, when your java virtual machine has a memory leak? How do you get an alarm for this issue? When a user calls you because the java application does not work, you will easily find the java heap out-of-memory message in the java log. But how do you get an alarm the moment the leak starts and the out-of-memory messages begin to show in your log? If you use zabbix as your monitoring system, this is easy. Write a simple script that looks for the "out of memory" pattern in the last 10 or 20 lines of the java virtual machine log file. When the pattern starts to show up there, it will trigger an alarm on the zabbix server, which will send you a mail or sms!

Ok, the script should look like this.
server#cat java_out.sh
#!/bin/bash
LOG='java_log_file'
OUT='/tmp/java_out_of_memory.txt'
a=`tail -10 $LOG|grep 'java.lang.OutOfMemoryError: Java heap space'|wc -l`
if [ $a -eq 0 ]
then
echo 0 > $OUT
else
echo 1 > $OUT
fi

In the line where LOG is defined, put the location of your java application log.
Put this script in cron so that it runs every minute (use the full path to the script):


* * * * * /path/to/java_out.sh


In your zabbix_agentd.conf insert a line defining this new item that will be sent to the zabbix server. It should look something like this:

UserParameter=java_out_of_memory,cat /tmp/java_out_of_memory.txt | awk '{print $1}'

Restart the zabbix agent.
server#/etc/init.d/zabbix_agent stop
server#/etc/init.d/zabbix_agent start

How to check if the zabbix server gets the java out of memory item you created? On the zabbix server execute this:

zabbix_server#zabbix_get -s IP_of_server -k java_out_of_memory

If the output is 0, there is no problem with a java memory leak. If the output is 1, you have a memory leak, because somewhere in the last 10 lines of the java log file there is a java heap out-of-memory message.

Now, through the web interface, add the java_out_of_memory item on the zabbix server! With it you will have a time diagram of how often the memory leak has happened.
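To actually receive the mail or sms, the item also needs a trigger on the zabbix server. The trigger expression should look something like this (java-server is a placeholder for your host name):

{java-server:java_out_of_memory.last(0)}=1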


Friday 8 August 2014

bash script syntax error near unexpected token `else'

In case you get this error during the execution of your bash script, there is an issue in your if statement.
Let's see how this works on an example. We have a simple bash script called 123.sh.

server~# cat 123.sh
#!/bin/bash

a=`ls|wc -l`
if [ $a == 0 ] then
echo 0
else
echo 1
fi


server:~# ./123.sh
./123.sh: line 5: syntax error near unexpected token `else'
./123.sh: line 5: `else'




As you can see, bash is telling us that we have a syntax error on line 5. But if you look at line 5, there is just the else keyword, and there is no way it is wrongly written. So the problem has to be somewhere above.
For this particular error, the problem is the location of then. It sits on the same line as the if statement without a separating semicolon, and that is the reason this doesn't work. The syntax should be like this:

if [  conditions ]
then
.
.
else
.
.
fi

So if we change that line, the script will work! (Keeping then on the same line also works, as long as a semicolon precedes it: if [ $a == 0 ]; then)
 server:~# cat 123.sh
#!/bin/bash
a=`ls|wc -l`
if [ $a == 0 ]
then
echo 0
else
echo 1
fi

server:~# ./123.sh
1
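A side note: inside [ ] the == operator compares strings; for numbers the usual test operator is -eq. The same script with it (just a sketch):

server:~# cat 123.sh
#!/bin/bash
a=`ls|wc -l`
# -eq compares integers; == compares strings
if [ "$a" -eq 0 ]
then
echo 0
else
echo 1
fi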

Thursday 31 July 2014

grep in all files in all subdirectories

You want to find some line or phrase but you do not know its exact location?

Use grep!

Syntax is simple!

server#grep -r phrase /dir

The -r option means recursive, so depending on the directory you give it, this will grep in all files in all subdirectories!


how to quickly drop partitions of a mysql table - EXPLAINED!

So your file system is full (or will be very soon) and there is a serious possibility that your mysql database will stop. You have to drop some old partitions.
There are two ways to do it.

First way - slow way


server# mysql
Welcome to the MySQL monitor.  Commands end with ; or \g.
.

.
mysql> use database_name;
Database changed
mysql> alter table table_name drop partition partition_name;


Remember that partition_name has to be the name of the oldest partition, so that when you drop it, the consistency of the data stored in the database is preserved.
In my case this looks like this:


mysql> alter table SystemEvents drop partition p20140701;
Query OK, 0 rows affected (0.23 sec)
Records: 0  Duplicates: 0  Warnings: 0

mysql>

In case you have more than a few partitions to drop, you can drop several partitions in one mysql statement or drop them one by one. This is up to you, but it can be quite boring and time consuming.

Different way - fast way

This way requires just a little text manipulation skill.
As you know, you can execute mysql commands by redirecting a file into mysql. The syntax goes like this:

mysql database_name < conf_file

So how to write that file?

1. List the partitions that you want to drop.
In my case this looks like this:

server# ls /var/lib/mysql/Syslog/|grep 201407|grep MYI |awk -F# '{print $3}'|awk -F. '{print $1}' >list.txt
server#cat list.txt
p20140702
p20140703
p20140704
.
.
p20140728
p20140729
p20140730

2. Add the mysql command at the beginning of the first line of that file:

alter table SystemEvents drop partition p20140702
p20140703
p20140704
.
.
p20140728
p20140729
p20140730

3. Add a comma (,) as the last character of every line:

server#awk '{print $0","}' list.txt >list_1.txt

4. Close the mysql syntax by replacing the comma at the end of the last line with a semicolon (;).

Your file should look like this

server#cat list_1.txt
alter table SystemEvents drop partition p20140702,
p20140703,
p20140704,
.
.
p20140728,
p20140729,
p20140730;


5. Let's drop the partitions:

server#mysql Syslog <list_1.txt
server#

And that is that!
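If you do this often, the whole file can be generated in one pipeline. A sketch, assuming the same Syslog file layout as above:

server# ls /var/lib/mysql/Syslog/ | grep 201407 | grep MYI | awk -F'#' '{print $3}' | awk -F. '{print $1}' | sed '1s/^/alter table SystemEvents drop partition /; $!s/$/,/; $s/$/;/' > list_1.txt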



Wednesday 30 July 2014

Iso repository on Red Hat or Centos - EXPLAINED!

In case you want to install some new packages or update already installed ones, you can do it with the rpm command and risk dependency problems, or you can do it with yum and avoid the dependency issue, because yum will do all the work for you!

In case your server does not have access to the internet and you do not have a central repository server, you can always use an iso image and add it to your yum repositories.

How to do this?

1. Copy iso image to your server
2. Mount iso image

server#mount -t iso9660 /iso_location /mount_point -o loop

In my case

server#mount -t iso9660 rhel-server-6.3-i386-dvd.iso /mnt -o loop

Check if it is mounted

server# mount |grep mnt
/opt/iso/rhel-server-6.3-i386-dvd.iso on /mnt type iso9660 (rw,loop=/dev/loop0)
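If you want the iso to stay mounted after a reboot as well, an fstab entry like this can do it (using the iso path from my example):

/opt/iso/rhel-server-6.3-i386-dvd.iso  /mnt  iso9660  loop,ro  0 0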

3. Go to yum configuration folder

server#cd /etc/yum.repos.d

Create a new conf file. The extension has to be .repo.

server# cat dvd.repo
[DVD]
name=RHEL $releasever - DVD
baseurl=file:///mnt/
enabled=1
gpgcheck=0
 

The first line is the name (repo id) of the repository; you have to have this line.
The second line is just for the user to know which repository he is using; it is not required, but it is nice to have.
The third line is the location of your mounted iso.
The fourth line enables the repository: 1 means enabled, 0 disabled.
The fifth line is the gpg signature check. Because I am using an iso from the vendor, I disabled this.

You can add more options to this file if you like, but this is, I think, the simplest configuration you can get.

4. Check that yum sees the new repository:
# yum repolist
Loaded plugins: product-id, refresh-packagekit, security, subscription-manager
Updating certificate-based repositories.
Unable to read consumer identity
repo id                        repo name                                  status
DVD                            RHEL 6Server - DVD                         2,797
repolist: 2,797
 
5. Test if it is working. Try to install some new package:

server# yum install yp-tools
Loaded plugins: product-id, refresh-packagekit, security, subscription-manager
Updating certificate-based repositories.
Unable to read consumer identity
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package yp-tools.i686 0:2.9-12.el6 will be installed
--> Processing Dependency: ypbind for package: yp-tools-2.9-12.el6.i686
--> Running transaction check
---> Package ypbind.i686 3:1.20.4-29.el6 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package           Arch          Version                     Repository    Size
================================================================================
Installing:
 yp-tools          i686          2.9-12.el6                  DVD           64 k
Installing for dependencies:
 ypbind            i686          3:1.20.4-29.el6             DVD           51 k

Transaction Summary
================================================================================
Install       2 Package(s)

Total download size: 115 k
Installed size: 241 k
Is this ok [y/N]: n
Exiting on user Command

server#

As you can see, you can install a new package from the DVD repository.

And that is it!


Friday 25 July 2014

custom script to speed up mount iso - EXPLAINED

I read once that a good system admin will write a script for anything he has to do more than a few times, especially if it means typing long command lines for something that is not very important but is still time consuming.

So...
Very often I have to mount iso files on the file system. The command line for this is:

server#mount -t iso9660 /location_of_iso /mount_point -o loop

This is not very long, but after some time even this can be very annoying.
So I decided to speed things up.

I wrote a simple script called mount_iso. It looks like this:

server# cat mount_iso
#!/bin/bash
ISO="$1"
echo $ISO
mount -t iso9660 $ISO /mnt/ -o loop


Make it executable.
server#chmod +x mount_iso

It is good to move this script into one of the folders from which scripts and programs are run (a folder listed in PATH):

server# env |grep PATH
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin


I moved mine to /bin.


How does it work?

server# mount_iso rhel-server-6.3-i386-dvd.iso
rhel-server-6.3-i386-dvd.iso
server# mount |grep mnt
/opt/iso/rhel-server-6.3-i386-dvd.iso on /mnt type iso9660 (rw,loop=/dev/loop0)

I predefined the mount point to be /mnt just because this is convenient for me.

This only saves a few seconds of my time, but I find it very useful.
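If you want the script to complain when you forget the argument, a small variation (just a sketch) could look like this:

#!/bin/bash
# mount_iso: mount an ISO image on /mnt
if [ -z "$1" ]; then
        echo "Usage: mount_iso /path/to/image.iso"
        exit 1
fi
mount -t iso9660 "$1" /mnt -o loop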



Wednesday 23 July 2014

How to add date to logrotate files- EXPLAINED!

By default, when log files are rotated, they are first renamed and then compressed, if that option is enabled.
The rotated files get numbers as extensions, so depending on how many rotation cycles you keep, it looks something like this:

server# pwd
/var/log
server# ls |grep messages
messages
messages.1
messages.2
messages.3
messages.4


In case you want to change these numbers to a date expression, you have to add this line to /etc/logrotate.conf:

.
.
dateext
.
.

On the next rotation cycle your log files will look like this:

.
.
messages-20140209
messages-20140216
messages-20140223
.
.
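If you do not like the default YYYYMMDD pattern, logrotate also has a dateformat directive that controls it (note that older logrotate versions only accept the %Y, %m, %d and %s specifiers):

dateext
dateformat -%d%m%Y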

logrotate change from gzip to bzip2 compression - EXPAINED

The default compression of log files in logrotate is gzip. In case you want more efficient compression, you can switch the gzip compression to bzip2.

How to do this?


In the /etc/logrotate.conf file your settings should look like this (compression itself still has to be enabled with compress; compresscmd only changes the command used, and compressext sets the matching extension):

.
.
compress
compresscmd /usr/bin/bzip2
compressext .bz2
.
.

On the next logrotate cycle your files will be compressed with bzip2.

How to compress logs with logrotate?

By default, logs created in /var/log/ are rotated and, depending on the Linux distro you use, compressed or not. Depending on how frequent the rotation is, your log files can be quite big. To avoid filling up the file system, it is wise to compress old log files during the rotation cycle.

To enable compression of log files, uncomment the compress statement in /etc/logrotate.conf:

.
.
# uncomment this if you want your log files compressed
compress
.
.

There is no need to restart any service. During the next logrotate cycle, the new configuration will be applied. If you cannot wait, execute it manually

server#logrotate /etc/logrotate.conf

and compressed logs will appear in the /var/log folder.


Wednesday 9 July 2014

Adding new LUN with multipath on Linux

So you need to add another LUN to your server. Here is the procedure for doing it if you use multipath on your server!

First, your storage admin has to assign the new LUN to your server. You may see this in your log when the LUN is added from the storage side:

kernel: sd 1:0:0:10: [sdbb] Warning! Received an indication that the LUN assignments on this target have changed. The Linux SCSI layer does not automatically remap LUN assignments.
After this is done, you can proceed.
List your current multipath status:

server# multipath -ll
|-+- policy='round-robin 0' prio=4 status=active
| |- 1:0:1:9  sdau 66:224 active ready running
| `- 2:0:1:9  sdan 66:112 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  |- 1:0:0:9  sdk  8:160  active ready running
  `- 2:0:0:9  sdai 66:32  active ready running

Go to the /sys/class/fc_host folder:

server# cd /sys/class/fc_host/
server:/sys/class/fc_host # ls
host1  host2 


From multipath -ll you can see that we are using host1 and host2 (this is important in case you have more than two single-port FC cards or some dual-port card; every FC port is represented as a hostX). In this case we have two single-port cards, and the ports are represented as host1 and host2.

First we issue a LIP (loop initialization procedure) on host1, which makes the port rescan for devices!

server#cd host1
server#echo 1 >issue_lip

In your /var/log/messages something like this should appear:

kernel: qla2xxx [0000:0a:00.7]-801c:5: Abort command issued xxxxx 
kernel: scsi 1:0:0:10: Direct-Access     DGC      VRAID            0532 PQ: 0 ANSI: 4
kernel: scsi 1:0:0:10: alua: supports implicit and explicit TPGS
This means the FC port detected the newly added LUN.

Now rescan the scsi host host1!

server# cd /sys/class/scsi_host/host1
server# echo "- - -" >scan

This will add the new LUN to the system as a scsi device!
In your log something like this should appear:

.
sd 1:0:0:10: [sdbb] Attached SCSI disk
.
.

Now if you check your multipath:
server#multipath -ll
|-+- policy='round-robin 0' prio=4 status=active
| |- 1:0:1:9  sdau 66:224 active ready running
| `- 2:0:1:9  sdan 66:112 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  |- 1:0:0:9  sdk  8:160  active ready running
  `- 2:0:0:9  sdai 66:32  active ready running

|-+- policy='round-robin 0' prio=4 status=active
| `- 1:0:1:10  sdbb 67:80 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  `- 1:0:0:10  sdbb 67:16  active ready running

As you can see, the old LUN has 4 paths and the new LUN has only 2 paths. The other two paths will appear when you do the same rescan on host2.
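The host2 steps are the same two writes, just against the other port:

server# echo 1 > /sys/class/fc_host/host2/issue_lip
server# echo "- - -" > /sys/class/scsi_host/host2/scan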

Don't be confused that on my server I see 4 paths for every LUN: I have two FC ports connected to two FC switches, and from each FC switch to the storage there are two connections for redundancy. So in total, we have 4 paths from server to storage.


Tuesday 8 July 2014

crontab deleted accidently - how to restore it EXPLAINED!

Well, accidents do happen! Usually, when they happen, you are not prepared for them.

One of those accidents in the Linux sysadmin world is definitely erasing a crontab. You did not mean to do it. But you have to restore it!

Ok, a few things you need to know first!
1. Crontab is a per-user scheduler. Every user has his own crontab. One user cannot change the crontab of another user (unless root gave him permission):

# whoami
test
# crontab -e -u test1
must be privileged to use -u


But a user can delete his own crontab. How? Well, that is not the subject of this post, but usually the reason is lack of knowledge. By this I mean the following:

#crontab -r

This removes your crontab. And you can't do anything about it. The point is: if you do not know what you are doing, then DON'T do it! Read about it first, or use some test server, etc.

By the time the user calls you to say that his crontab is missing, the damage is already done. And they will call you.

2. When any user creates a crontab, a file is generated for it. So when you edit or list your crontab, you are actually reading that file. Depending on your Linux distro the exact location may vary, but they are all under the /var/spool/cron/ folder (on Ubuntu the location is /var/spool/cron/crontabs, on SLES 11 it is /var/spool/cron/tabs, on CentOS it is /var/spool/cron). So when you list this folder you will see something like this:

# ls
root  test  test1

This means that users root, test and test1 have crontabs! If you read these files, you will see that they are the same as the crontab that you edit!
Let's delete the crontab for user test1!

$ whoami
test1
$ crontab -r
$ crontab -l
no crontab for test1

List
# ls
root  test

And now... the restore of the crontab! If you don't have a backup of this folder/files/OS, then you can't restore the crontab for that particular user (in our case test1)! If this is the case, your user has to write the crontab again. This can be a very big problem, because from this point on none of the scripts that were scheduled through crontab will run at their particular times. This is just a small example of why you (if you are serious about administration) need to have backups!
 
If you do have a backup, just restore the file to the crontab location (keep the file's original owner and permissions):

# cp /root/cron123 test1
root@ubuntu:/var/spool/cron/crontabs# ls
root  test  test1
root@ubuntu:/var/spool/cron/crontabs# crontab -l -u test1
* * * * * date


In case the user accuses you of deleting his crontab file (because root can edit/delete other users' crontabs), then, depending on the log level of your system log, you will have a record of crontab actions:

Jul  8 09:20:26 ubuntu crontab[22165]: (test1) LIST (test1)
Jul  8 09:20:36 ubuntu crontab[22167]: (test1) DELETE (test1)
Jul  8 09:20:45 ubuntu crontab[22168]: (test1) LIST (test1)
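Speaking of backups, a trivial nightly safety net is a root cron job that archives the whole spool directory (the path and schedule are just an example; note that % has to be escaped as \% inside crontab entries):

0 2 * * * tar czf /root/crontabs-$(date +\%F).tar.gz /var/spool/cron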

Thursday 3 July 2014

How to show adsense ads in archive posts!

I already explained how to insert adsense ads before the post text but after the post title!

A friend recently told me that I do not have ads in archive articles! When I looked into it, it was really true!

Hm...
Ads were shown normally on the home page! When you go to the archive and click on a month, the posts that are shown have ads! But when I clicked on a single post in the archive, the ads were missing!

So what was the problem?

The problem is itemprop! I had put my adsense code in the section where itemprop='articleBody',
and that only shows the code on the home page!

<data:post.body/>
        <div style='clear: both;'/> <!-- clear for photos floats -->
      </div>
    <b:else/>
      <div class='post-body entry-content' expr:id='&quot;post-body-&quot; + data:post.id' itemprop='articleBody'>
        <div style='float: left; margin: 10px 10px 10px 0;'>
    &lt;script async src=&quot;//pagead2.googlesyndication.com ...
.
.
</div>

I inserted the same code in the section where itemprop='description articleBody' and now I have ads shown in archive posts!

<div class='post-body entry-content' expr:id='&quot;post-body-&quot; + data:post.id' itemprop='description articleBody'>
           <div style='float: left; margin: 10px 10px 10px 0;'>
    &lt;script async src=&quot;//pagead2.googlesyndication.com/pagead/js/adsbygoogle.js&quot;&gt;&lt;/script&gt;
.
.
</div>


As you can see, the only difference is 'description'!

 

Tuesday 1 July 2014

How to run a Linux script every few seconds under cron - EXPLAINED and SOLVED!

Crontab is a scheduler for Linux that runs scripts at certain times of the day, week or month!

It has 5 fields for setting the schedule:

_  _  _  _  _  script.sh

The first field is minutes (00-59).
The second field is hours (00-23).
The third field is the day of the month (1-31).
The fourth field is the month (1-12).
The fifth field is the day of the week (0-6, where 0 is Sunday).
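
A concrete example (the script path is made up):

30 2 * * 1 /opt/scripts/backup.sh

will run backup.sh at 02:30 every Monday.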

If you have a star in some field (it is a wildcard sign, after all), that means every value is included!
So

* * * * * script.sh

means that script.sh will execute every minute of every hour, every day of the month, every month of the year and every day of the week.
 
The minimum time resolution you can use in cron is one minute! The script will be executed at second 00 of the scheduled time. But what if you have to execute your script at the 20th second of every minute, or every 20 seconds?
Cron alone cannot do this because, as we said, the minimum time resolution is one minute. (Resolution is the smallest step between two neighbouring points.)

In case you think that "/N" will solve your issue, you are wrong. The / does not mean divide like in math. It means repeat every N minutes, hours, etc., depending on which field it stands in.

If you have something like this

*/5 * * * * script.sh

this will execute script.sh every 5 minutes.

or

0 */5 * * * script.sh

will execute the script once every 5 hours! (Note the 0 in the minute field; with a * there it would run every minute of every fifth hour.)
You get the point!

OK, our problem needs a different approach!

If you want script.sh to be executed every 15 seconds, do the following!

server#cp script.sh script15.sh
server#cp script.sh script30.sh
server#cp script.sh script45.sh 

In script15.sh, enter the following lines

#!/bin/bash
sleep 15
.
.
 

before the commands in the script!
For script30.sh enter sleep 30, for script45.sh enter sleep 45.
The sleep command delays everything after it for the given number of seconds!

sleep - delay for a specified amount of time
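
Put together, script15.sh looks something like this (do_the_work.sh stands in for whatever your script really does):

#!/bin/bash
# wait 15 seconds into the minute, then run the real work
sleep 15
/opt/scripts/do_the_work.sh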

Now in crontab

* * * * * script.sh
* * * * * script15.sh
* * * * * script30.sh
* * * * * script45.sh

So all four scripts start at the same time, but because of the sleep command inside them, each one is delayed by its specific number of seconds!




Monday 30 June 2014

How to install vmware-tools on Linux

Here is the procedure for installing vmware-tools on a Linux VM running on the VMware platform!

Your VMware administrator must mount the VMware Tools ISO on your VM!
Once he has done that, you can proceed!

Log in to your Linux VM server. You have to be root to install this!

First, mount the VMware Tools CD:

server# mount /dev/cdrom-hdc /mnt/
mount: block device /dev/cdrom-hdc is write-protected, mounting read-only
 server#cd /mnt/
server#ls
VMwareTools-8.3.2-257589.tar.gz

Because this is a read-only file system, you have to untar the file somewhere you can write!
 

server#cp VMwareTools-8.3.2-257589.tar.gz /opt
server#cd /opt
server#tar xzvf VMwareTools-8.3.2-257589.tar.gz
server#ls
VMwareTools-8.3.2-257589.tar.gz  vmware-tools-distrib
server# cd vmware-tools-distrib/
server# ls
bin  doc  etc  FILES  INSTALL  installer  lib  vmware-install.pl
server#./vmware-install.pl

Follow the procedure! There is little or no interaction - just press Enter!
After the installation has finished, a new init script will appear in /etc/init.d!

After you have finished the installation you can delete these files and unmount the CD-ROM, as shown below.
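
Something like this, using the same paths as above:

server# cd /
server# umount /mnt
server# rm -rf /opt/vmware-tools-distrib /opt/VMwareTools-8.3.2-257589.tar.gz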

For the VMware platform to become aware of the vmware-tools installed on a Linux VM, a reboot is needed! Because of this, it is very wise to install vmware-tools during the OS installation!

Free inodes and GlassFish - EXPLAINED!

Recently I had an issue involving a file system and GlassFish!

Users were complaining that a GlassFish application had stopped working! I logged on to the server and checked the GlassFish server.log!

server#cd /opt/glassfish3/glassfish/domains/domain1/logs
server# tail server.log

server.log_2013-03-05T11-03-08:[#|2013-01-03T12:16:01.092+0100|INFO|glassfish3.1.2|javax.enterprise.system.std.com.sun.enterprise.server.logging|_ThreadID=23;_ThreadName=Thread-2;|WARNING: 8 bytes remaining but no space left|#]
server.log_2013-03-05T11-03-08:[#|2013-01-03T12:16:01.092+0100|INFO|glassfish3.1.2|javax.enterprise.system.std.com.sun.enterprise.server.logging|_ThreadID=23;_ThreadName=Thread-2;|WARNING: 300 bytes remaining but no space left|#]
server.log_2013-03-05T11-03-08:[#|2013-01-03T12:16:01.092+0100|INFO|glassfish3.1.2|javax.enterprise.system.std.com.sun.enterprise.server.logging|_ThreadID=23;_ThreadName=Thread-2;|WARNING: 300 bytes remaining but no space left|#]
server.log_2013-03-05T11-03-08:[#|2013-01-03T12:16:01.094+0100|INFO|glassfish3.1.2|javax.enterprise.system.std.com.sun.enterprise.server.logging|_ThreadID=23;_ThreadName=Thread-2;|WARNING: 8 bytes remaining but no space left|#]
server.log_2013-03-05T11-03-08:[#|2013-01-03T12:16:01.094+0100|INFO|glassfish3.1.2|javax.enterprise.system.std.com.sun.enterprise.server.logging|_ThreadID=23;_ThreadName=Thread-2;|WARNING: 8 bytes remaining but no space left|#]

OK, no space left on that partition! But when I checked the free disk space, there was space left on all partitions!

server# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/rootvg-rootlv
                      9.2G  1.8G  7.0G  21% /
tmpfs                 1.9G     0  1.9G   0% /dev/shm
/dev/sda1              97M   32M   60M  35% /boot
/dev/mapper/rootvg-optlv
                       14G  1.6G   12G  12% /opt
/dev/mapper/rootvg-varlv
                      4.9G  282M  4.4G   7% /var

So there was free space left, but for some reason GlassFish could not write to the file system. I checked whether I could write anything myself. It turned out that I could write to every file system except the one where GlassFish was installed!
 
Hm....

After some thinking I decided to check the inode status of the partitions!

server# df -i
Filesystem            Inodes   IUsed   IFree IUse% Mounted on
/dev/mapper/rootvg-rootlv
                      612000   55749  556251   10% /
tmpfs                 490591       1  490590    1% /dev/shm
/dev/sda1              25688      39   25649    1% /boot
/dev/mapper/rootvg-optlv
                      897600  897600       0  100% /opt
/dev/mapper/rootvg-varlv
                      321280    2267  319013    1% /var


OK, so I found the problem! There were no free inodes left. How many inodes you have on a partition depends on the size of that partition and on the file system you use!

What is inode?
" Inode is a data structure used to represent a filesystem object, which can be one of various things including a file or a directory. "

Basically, the number of inodes on a file system defines the maximum number of files and directories it can hold in total!
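
Note that on the ext file systems the inode count is fixed when the file system is created. If you know a partition will hold huge numbers of small files, you can raise the count at mkfs time; the -N value and the device below are just an example:

server# mkfs.ext4 -N 2000000 /dev/mapper/rootvg-optlv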

So I had close to 900,000 files or folders on my /opt partition! Since the file system was not full space-wise, there had to be some folder with a huge number of small files!
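
To hunt down such a folder, standard tools are enough. This sketch prints the ten directories under /opt with the most entries:

server# find /opt -xdev -type d | while read -r dir; do echo "$(ls -A "$dir" | wc -l) $dir"; done | sort -n | tail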

I found my problem in the folder /opt/glassfish3/glassfish/domains/domain1/generated/jsp/!
I restart my GlassFish server once a day, and during every restart loader_directories are created there for each GlassFish application running on that server. (I once knew why these folders are created, but I no longer remember.) Anyway, these folders are not big, but they do contain many files! It is safe to delete them without hesitation. Once I had deleted them, I checked the number of inodes again:

server# df -i
Filesystem            Inodes   IUsed   IFree IUse% Mounted on
/dev/mapper/rootvg-rootlv
                      612000   55750  556250   10% /
tmpfs                 490591       1  490590    1% /dev/shm
/dev/sda1              25688      39   25649    1% /boot
/dev/mapper/rootvg-optlv
                      897600   18998  878602    3% /opt
/dev/mapper/rootvg-varlv
                      321280    2267  319013    1% /var





Then I restarted the GlassFish server:

server# /etc/init.d/GlassFish_domain1 restart

and there were no more complaints about the GlassFish application!

Now I delete these folders every few months, as sketched below.
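
If you prefer to automate it, a root cron entry along these lines would do; the schedule is my own choice, and I am assuming, as above, that GlassFish simply recreates the generated JSP content when it needs it:

0 3 1 * * rm -rf /opt/glassfish3/glassfish/domains/domain1/generated/jsp/*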

Wednesday 25 June 2014

warning: /etc/rc.d/rc.sysinit saved as /etc/rc.d/rc.sysinit.rpmsave,warning: /etc/sysctl.conf created as /etc/sysctl.conf.rpmnew - EXPLAINED!

When you update or upgrade rpm packages, you may get warnings that a new config file has been created or that the configuration currently in use has been saved under another name!

For instance, if you update the initscripts rpm package, you will see warning messages like these:


server#yum update initscripts
.
.
  Updating       : initscripts                                               1/1
warning: /etc/rc.d/rc.sysinit saved as /etc/rc.d/rc.sysinit.rpmsave
warning: /etc/sysctl.conf created as /etc/sysctl.conf.rpmnew

.
.

What does this mean? As you all know, every service has a configuration, and an update may ship changed default config files (different approach, different settings, etc.). RPM then has to decide what to do with the file you currently have. Whether a .rpmsave or a .rpmnew file is created depends on how the package marks that config file, not on the service being updated.

A .rpmsave file is created when the package replaces the config file: your current file is saved as *.rpmsave and the new default is installed in its place.
A .rpmnew file is created when the package keeps your config file: your current file stays untouched and the new default is written next to it as *.rpmnew.

In my example, /etc/rc.d/rc.sysinit was replaced and my old file was saved as /etc/rc.d/rc.sysinit.rpmsave. /etc/sysctl.conf, on the other hand, was left alone and the new defaults were written to /etc/sysctl.conf.rpmnew.

Now comes the tricky part! If you reboot your server, the new rc.sysinit configuration will be applied, and this may mean that some of your services do not start because they are no longer in that file. Usually no services are started from rc.sysinit, but you never know, especially if you inherited the server from another admin. So whenever these files are created, you should look at the differences between them.
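
A plain diff shows you what changed:

server# diff /etc/rc.d/rc.sysinit /etc/rc.d/rc.sysinit.rpmsave
server# diff /etc/sysctl.conf /etc/sysctl.conf.rpmnew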

Simple solutions are best: after your config file has been saved as .rpmsave and you have checked the differences, just move the .rpmsave back over the new file!

server#mv /etc/rc.d/rc.sysinit.rpmsave /etc/rc.d/rc.sysinit

and after the reboot you will still be using the same configuration as before!

Knowing all this can save you a lot of troubleshooting time!

A .rpmnew file does not cause any trouble, because the service keeps using the original config file; just remember to merge in the new defaults when you get the chance!





Tuesday 24 June 2014

pam_tally2(sshd:auth): user user1(1001) tally 15, deny 3 SOLVED and EXPLAINED!

pam_tally2(sshd:auth): user user1(1001) tally 15, deny 3
If you are seeing this message in the /var/log/secure log, it means that someone has tried 15 times to log in to your system as user1! The user may also be complaining that he cannot connect as user1! The good thing is that the next line in the secure log gives you the IP address of the computer the attempts came from:

pam_tally2(sshd:auth): user user1 (1001) tally 15, deny 3
Jun 24 13:09:08 server1 sshd[111184]: Failed password for user1 from 192.168.0.25 port 10180 ssh2

Ok, explanation!
You are using PAM. It is a security layer controlling access to your system: with it you can control access to system services (like sshd) or commands (like passwd). The settings live in /etc/pam.d/.

OK, so let's troubleshoot!
Check your /var/log/secure:

server#tail /var/log/secure 
.
.
Jun 24 13:09:08 server1 sshd[111184]: pam_tally2(sshd:auth): user user1 (1001) tally 18, deny 3
Jun 24 13:09:08 server1 sshd[111184]: Failed password for user1 from 192.168.0.25 port 10180 ssh2


From here we can see that the PAM module pam_tally2.so is responsible for the user lockout! We can also see that the deny limit is 3 and that 18 attempts have been made to log in as user1.
Now read the PAM configuration for sshd:

server#cat /etc/pam.d/sshd
#%PAM-1.0
auth       include      system-auth
auth       required     pam_tally2.so deny=3 onerr=fail lock_time=60
account    required     pam_nologin.so
account    include      system-auth
account    required     pam_tally.so
password   include      system-auth
session    optional     pam_keyinit.so force revoke
session    include      system-auth
session    required     pam_loginuid.so
session    required     pam_limits.so

Here we can see the settings for failed logins! PAM is using the module pam_tally2.so: after a failed login you have to wait 60 seconds before trying again (lock_time=60), and after 3 failed logins (deny=3) the user account is locked!
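
If you do not want to unlock accounts by hand every time, pam_tally2 also has an unlock_time option; the 900 seconds below is just an example value of mine (check the pam_tally2 man page):

auth       required     pam_tally2.so deny=3 onerr=fail lock_time=60 unlock_time=900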

Now issue the following command:
 

server# pam_tally2
Login           Failures Latest failure          From
user1                 18 06/24/14 14:11:58       192.168.0.25

From here we can see how many failures user1 has had and when the last attempt happened!

SSH access for user user1 is locked and you want to unlock it. The command for that is:

server#pam_tally2 -r -u user

In our case that is

server#pam_tally2 -r -u user1

Now when you issue the pam_tally2 command, no failures will be shown and user1 will be able to log in to the system again!

An important thing to know: once a user account is locked, there is no point in hammering the login again! Depending on your PAM configuration, a successful login may or may not reset the failure counter! PAM configuration is complex, and if you do not know what you are doing, you can make your life much, much harder!