In the public cloud, a key to being secure is a solid understanding of the shared security model that exists between you (the customer) and your cloud provider. Without this, you may make assumptions that your cloud provider is protecting you, when in fact you are actually responsible for particular security functions.

Your cloud provider is responsible for securing the foundational services, such as compute power, storage, database and networking services, but you are responsible for the configuration of those services. At the network layer, your service provider is responsible for network segmentation, perimeter services, and some DDoS and anti-spoofing protection.

But you are responsible for network threat detection, reporting and incident response. At the host layer, you are responsible for access management, patch management, configuration hardening, security monitoring and log analysis. The application security components of your site are 100% your responsibility. The model below shows the breakdown of responsibilities between you and your service provider:

Shared Responsibility Model


Understanding your role and the role of your cloud provider will not only help you make the best decision concerning your cloud infrastructure, it will also ensure that once implemented your cybersecurity strategy will efficiently and cost-effectively protect your data from threats to the cloud.

Next we’ll look at how you can protect your online assets with 7 Best Practices for Cloud Security.

Amazon’s Docker service is linking into Apache Mesos for simpler clustering


Summary: With cluster management a persistently big issue, recently launched Amazon container service ECS is aiming to show how it can integrate with Apache Mesos and Marathon.

Amazon’s Docker-centric container service is working on ways to link into Apache Mesos and the popular Marathon services scheduler framework to widen users’ cluster-management options.

Launched last November, Amazon EC2 Container Service, or Amazon ECS, has just unveiled an Apache Mesos scheduler driver as a proof-of-concept integration with Marathon.

The open-source driver, which sends Mesos management commands direct to ECS, is designed to show how Marathon could schedule workloads on ECS.

It is also aimed at demonstrating the core design principles behind the Amazon service, which separates scheduling logic from state management, according to Deepak Singh, who founded and leads ECS.

“This allows you to use the ECS schedulers, write your own schedulers, or integrate with third-party schedulers,” Singh said in a blogpost.

Cluster management is becoming an important issue for developers who are building distributed applications in the cloud.

“A common example of developers interacting with a cluster management system is when you run a MapReduce job via Apache Hadoop or Apache Spark,” Singh said.

“Both these systems typically manage a coordinated cluster of machines working together to perform a large task. In the case of Hadoop or Spark, these tasks are most often data-analysis jobs or machine learning.”

Last week, version 0.8.0 of Marathon was released. Mesosphere, a major contributor to the Mesos open-source project, describes it as the most popular framework on Mesos, used in large-scale production at a number of major companies worldwide.

Singh said cluster management systems face two challenges. The first is the complexity of managing the state of the cluster.

“Software like Hadoop and Spark typically has a Leader, or a part of the software that runs in one place and is in charge of coordination. They’ll then have many, often hundreds or even thousands of Followers, or a part of the software that receives commands from the Leader, executes them, and reports state of their sub-task,” he said.

“When machines fail, the Leader must detect these failures, replace machines, and restart the Followers that receive commands. This can be a significant portion of code written for applications which need access to a large pool of resources.”

The second challenge for cluster management systems is that each application typically assumes full ownership of the machine where its tasks are running.

“You will often end up with multiple clusters of machines, each dedicated fully to the management system in use. This can lead to inefficient distribution of resources, and jobs taking longer to run than if a shared pool of resources could be used,” Singh said.

In the GitHub repository for the Marathon driver, Amazon points out that the software is for demonstration purposes and is “not recommended for production use”.

The company goes on to say: “We are working with the Mesos community to develop a more robust integration between Apache Mesos and Amazon ECS.”

More on Docker and containers

Forrester’s 2015 cloud predictions: Docker rises, storage pricing war claims lives


Summary:The market analysis company lays out what it sees as the top 10 major cloud developments that will shape the business landscape over the next year.


Cloud computing is a disruptive technology, and resistance to its power is futile.

This is the premise surrounding the latest set of 2015 predictions from Forrester Research, in which the market analysis company lays out what it sees as the top 10 major cloud developments that will shape the business landscape over the next year.

“The landscape for cloud computing changes quickly, so your business technology agenda must adapt just as rapidly,” the report states. “Your business will earn an early mover advantage by keeping ahead of these changes.”

Under Nadella’s cloud-first strategy, Microsoft could be set to generate more of its revenue from its cloud services than from its traditional on-premises applications, Forrester says. For businesses, this means there’s an opportunity to have the upper hand when it comes time to negotiate contracts, as sales teams will want to push as much cloud as possible into each enterprise license agreement.

Back-office applications will need RESTful interfaces. Developers tasked with linking together apps via APIs are going to be on the lookout for services that communicate via REST interfaces, Forrester says. But rather than waiting for REST APIs to arrive via an upgrade, companies will look to replace their enterprise service bus with an API management solution.

Cloud data breaches are a sure thing. Forrester doesn’t mince words with this one, saying that CIOs should expect to encounter a breach in the cloud – and that it will be their fault, not the SaaS provider’s. “The culprits will likely be common process and governance failures such as poor key management or lack of training or perimeter-based thinking by your security department,” the report states. “A breach of some form is inevitable.”

Docker containers will cement their place. Companies ranging from Google to eBay have jumped on the Docker bandwagon, and Forrester recommends that others follow suit. “Docker is not a fad. It marks a new approach that delivers real benefits, and it is here to stay.”

Hybrid cloud management will finally mature. Forrester says that in 2015, enterprises will start to figure out how to use the tools that are available to expose private cloud resources to their developers, so long as they stop creating artificial boundaries between private and public clouds and their management tools.

Managed private clouds will face a death spiral. The on-premise, remotely managed private cloud is a doomed model, Forrester says. Not only does it offer enterprises no lasting value, it poses far more challenges than potential benefits. Forrester sees managed private clouds dwindling significantly over the next year.

Industry-specific SaaS will surge. For SaaS vendors, the coming year will be ripe with vertical expansions. The reason? To better appeal to enterprise customers, Forrester says. Expect to see Workday break out from education and government, and for Salesforce.com to throw their hat in, too.

SaaS vendors tiptoe toward hybrid. Forrester expects SaaS vendors that focus on public-only multi-tenant deployments to begin offering a more hybrid model that includes some on-premise implementations.

Cloud storage pricing wars will claim lives. Basic online backup is not a sustainable business when fronted on its own, Forrester says. The companies that realize this the fastest will obviously have a better shot at avoiding casualties. “In 2015, enterprise online backup providers must either make the leap to disaster-recovery-as-a-service (DRaaS) and provide workload availability in addition to data protection or prepare to suffer a similar fate to Symantec Backup Exec cloud.”

ChefDK Setup and Install on Mac

Download the Chef-DK package…

Go to: http://downloads.getchef.com/chef-dk/

Install the package…

Once it’s installed, check that the install was successful with the following command:

$ chef verify

– Set System Ruby

$ which ruby

You might see something like this: ~/.rvm/rubies/ruby-2.1.1/bin/ruby

If you want to use the version of ruby that came with ChefDK, do the following (assuming you are using bash):

$ echo 'eval "$(chef shell-init bash)"' >> ~/.bash_profile

$ . ~/.bash_profile

$ which ruby

Install Git if you don’t already have it…

Setting up the chef-repo

You can do this one of two ways: download the starter kit from your Chef server, or set it up manually. In this case we will do it manually because I already happen to have a hosted Chef account and will copy my keys over from another location. So go to your designated chef directory and type:

$ git clone git://github.com/opscode/chef-repo.git

Then go to /Path/to/chef-repo/ and do:

mkdir .chef

Three files will need to be placed in this directory:

– knife.rb

– ORGANIZATION-validator.pem

– USER.pem

This directory will house your private keys and personal data. In order not to commit your .chef directory to your git repository, add it to .gitignore as follows:

$ echo '.chef' >> Path/to/chef-repo/.gitignore

Now you need to get the 3 files that go into your .chef directory. Either copy from another location or regenerate these files.

If you need to regenerate these files, follow the instructions below:

Log onto your Chef server. For me this is located at: https://manage.opscode.com

Once logged in click ADMINISTRATION at the top then the name of your organization.

knife.rb – Click “Generate Knife Config” and download the file. Place it in your .chef directory.

ORGANIZATION-validator.pem – can be downloaded by clicking “Reset Validation Key” in the Administration page.

USER.pem – can be downloaded by clicking Users on the left-hand side, choosing your username, and finally clicking “Reset Key”.

Now test your chef setup:

$ cd /Path/to/chef-repo

$ knife client list

This will display any chef clients you currently have.



Here we see only the validator client, which will be responsible for registering the future servers we add to our organization.

Getopts Tutorial


Small getopts tutorial

When you want to parse command-line arguments in a professional way, getopts is the tool of choice. Unlike its older brother getopt (note the missing s!), it’s a shell builtin command. The advantages are:

you don’t need to hand your positional parameters through to an external program
getopts can easily set shell variables you can use for parsing (impossible for an external process!)
you don’t have to deal with several getopt implementations that had buggy concepts in the past (whitespace handling, …)
getopts is defined in POSIX®
Some other methods to parse positional parameters (without getopt(s)) are described in: How to handle positional parameters.

Note that getopts is not able to parse GNU-style long options (--myoption) or XF86-style long options (-myoption)!


It’s useful to know what we’re talking about here, so let’s see… Consider the following commandline:

mybackup -x -f /etc/mybackup.conf -r ./foo.txt ./bar.txt
All these are positional parameters, but you can divide them into some logical groups:
-x is an option, a flag, a switch: one character, introduced by a dash (-)
-f is also an option, but this option has an additional argument (argument to the option -f): /etc/mybackup.conf. This argument is usually separated from its option (by a whitespace or any other splitting character) but that’s not a must, -f/etc/mybackup.conf is valid.
-r depends on the configuration. In this example, -r doesn’t take arguments, so it’s a standalone option, like -x
./foo.txt and ./bar.txt are remaining arguments without any option related. These often are used as mass-arguments (like for example the filenames you specify for cp(1)) or for arguments that don’t need an option to be recognized because of the intended behaviour of the program (like the filename argument you give your text-editor to open and display – why would one need an extra switch for that?). POSIX® calls them operands.
To give you an idea about why getopts is useful: The above commandline could also read like…

mybackup -xrf /etc/mybackup.conf ./foo.txt ./bar.txt
…which is very hard to parse with your own code. getopts recognizes all the common option formats.
The option flags can be upper- and lowercase characters, and of course digits. It may recognize other characters, but that’s not recommended (usability and maybe problems with special characters).

How it works

In general you need to call getopts several times. Each time it will use “the next” positional parameter (and a possible argument), if parsable, and provide it to you. getopts will not change the positional parameter set — if you want to shift it, you have to do it manually after processing:

shift $((OPTIND-1))
# now do something with $@
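As a runnable sketch of this idiom (the option letters and the list_operands name are made up for illustration):

```shell
# Parse (and ignore) options, then shift them away so "$@" holds only
# the operands. The ":xf:" option string is illustrative.
list_operands() {
  local opt
  OPTIND=1                # reset so the function can be called repeatedly
  while getopts ":xf:" opt; do
    :                     # a real script would act on $opt here
  done
  shift $((OPTIND-1))
  echo "$@"
}

list_operands -x -f /etc/mybackup.conf ./foo.txt ./bar.txt
```

Run against the example command line from earlier, this prints only ./foo.txt ./bar.txt.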
Since getopts will set an exit status of FALSE when there’s nothing left to parse, it’s easy to use it in a while-loop:

while getopts …; do

getopts will parse options and their possible arguments. It will stop parsing on the first non-option argument (a string that doesn’t begin with a hyphen (-) and isn’t an argument for any option in front of it). It will also stop parsing when it sees -- (double hyphen), which means end of options.
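A small sketch makes the stopping rules visible; the collect helper and the a/b options are invented for the demo:

```shell
# Collect every option letter getopts parses, so we can see where it stops.
collect() {
  local opt found=""
  OPTIND=1
  while getopts ":ab" opt; do
    found="$found$opt"
  done
  echo "$found"
}

collect -a -b        # both options parsed
collect -a stop -b   # parsing stops at the first non-option word
collect -a -- -b     # "--" ends option parsing explicitly
```

The first call prints ab; the other two print only a, because everything after the stopping point is left for the script as operands.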

Used variables

variable description
OPTIND Holds the index of the next argument to be processed. This is how getopts “remembers” its own status between invocations. Also useful for shifting the positional parameters after processing with getopts. OPTIND is initially set to 1, and needs to be reset to 1 if you want to parse anything again with getopts.
OPTARG This variable is set to any argument for an option found by getopts. It also contains the option flag of an unknown option.
OPTERR (Values 0 or 1) Indicates if Bash should display error messages generated by the getopts builtin. The value is initialized to 1 on every shell startup – so be sure to always set it to 0 if you don’t want to see annoying messages!
getopts also uses these variables for error reporting (they’re set to value combinations which aren’t possible in normal operation).

Specify what you want

The base syntax for getopts is:

getopts OPTSTRING VARNAME [ARGS...]

where:
OPTSTRING tells getopts which options to expect and where to expect arguments (see below)
VARNAME tells getopts which shell-variable to use for option reporting
ARGS tells getopts to parse these optional words instead of the positional parameters
The option-string

The option-string tells getopts which options to expect and which of them must have an argument. The syntax is very simple — every option character is simply named as is, this example-string would tell getopts to look for -f, -A and -x:

getopts fAx VARNAME
When you want getopts to expect an argument for an option, just place a : (colon) after the proper option flag. If you want -A to expect an argument (i.e. to become -A SOMETHING) just do:

getopts fA:x VARNAME
If the very first character of the option-string is a : (colon), which normally would be nonsense because there’s no option letter preceding it, getopts switches to “silent error reporting” mode. In production scripts, this is usually what you want (handle errors yourself and don’t get disturbed by annoying messages).

Custom arguments to parse

The getopts utility parses the positional parameters of the current shell or function by default (which means it parses “$@”).

You can give your own set of arguments to the utility to parse. Whenever additional arguments are given after the VARNAME parameter, getopts doesn’t try to parse the positional parameters, but these given words.

This way, you are able to parse any option set you like, here for example from an array:

while getopts :f:h opt “${MY_OWN_SET[@]}”; do

A call to getopts without these additional arguments is equivalent to explicitly calling it with “$@”:

getopts … “$@”
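A self-contained sketch of the custom-arguments form (the array name and the f/h options are illustrative):

```shell
# Parse a hand-made word set instead of the positional parameters.
MY_OWN_SET=(-f /etc/mybackup.conf -h)

OPTIND=1
while getopts :f:h opt "${MY_OWN_SET[@]}"; do
  case $opt in
    f) echo "config file: $OPTARG" ;;
    h) echo "help requested" ;;
  esac
done
```

Because the extra words follow VARNAME, getopts ignores "$@" entirely and walks the array instead.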
Error Reporting

Regarding error-reporting, there are two modes getopts can run in:

verbose mode
silent mode
For production scripts I recommend the silent mode: it looks more professional when the default error messages are suppressed, and it’s easier to handle, since the failure cases are reported in a more uniform way.

Verbose Mode

invalid option: VARNAME is set to ? (question mark) and OPTARG is unset
required argument not found: VARNAME is set to ? (question mark), OPTARG is unset and an error message is printed
Silent Mode

invalid option: VARNAME is set to ? (question mark) and OPTARG is set to the (invalid) option character
required argument not found: VARNAME is set to : (colon) and OPTARG contains the option character in question
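A sketch of both failure cases in silent mode (the silent_demo name is invented; -a takes an argument):

```shell
# Silent mode (leading colon): "?" flags an invalid option, ":" flags a
# missing argument; OPTARG carries the offending option character.
silent_demo() {
  local opt
  OPTIND=1
  while getopts ":a:" opt; do
    case $opt in
      a)  echo "a=$OPTARG" ;;
      \?) echo "invalid: -$OPTARG" ;;
      :)  echo "missing argument for: -$OPTARG" ;;
    esac
  done
}

silent_demo -a hello   # normal case
silent_demo -z         # invalid option
silent_demo -a         # missing argument
```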
Using it

A first example

Enough said – action!

Let’s play with a very simple case: only one option (-a) expected, without any arguments. Also, we disable the verbose error handling by preceding the whole option string with a colon (:):


while getopts ":a" opt; do
  case $opt in
    a)  echo "-a was triggered!" >&2 ;;
    \?) echo "Invalid option: -$OPTARG" >&2 ;;
  esac
done
I put that into a file named go_test.sh, which is the name you’ll see below in the examples.
Let’s do some tests:

Calling it without any arguments

$ ./go_test.sh
Nothing happened? Right. getopts didn’t see any valid or invalid options (letters preceded by a dash), so it wasn’t triggered.
Calling it with non-option arguments

$ ./go_test.sh /etc/passwd
Again, nothing happened. The very same case: getopts didn’t see any valid or invalid options (letters preceded by a dash), so it wasn’t triggered.
The arguments given to your script are of course accessible as $1 – ${N}.

Calling it with option-arguments

Now let’s trigger getopts: Provide options.

First, an invalid one:

$ ./go_test.sh -b
Invalid option: -b
As expected, getopts didn’t accept this option and acted as described above: it placed ? into $opt and the invalid option character (b) into $OPTARG. With our case statement, we were able to detect this.
Now, a valid one (-a):

$ ./go_test.sh -a
-a was triggered!
You see, the detection works perfectly. The a was put into the variable $opt for our case statement.
Of course it’s possible to mix valid and invalid options when calling:

$ ./go_test.sh -a -x -b -c
-a was triggered!
Invalid option: -x
Invalid option: -b
Invalid option: -c
Finally, it’s of course possible to give our option multiple times:

$ ./go_test.sh -a -a -a -a
-a was triggered!
-a was triggered!
-a was triggered!
-a was triggered!
The last examples lead us to some points you may consider:

invalid options don’t stop the processing: If you want to stop the script, you have to do it yourself (exit in the right place)
multiple identical options are possible: If you want to disallow these, you have to check manually (e.g. by setting a variable or so)
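For the second point, a sketch of rejecting a repeated option by tracking it in a variable (the function name and messages are made up):

```shell
# Allow -a at most once; a second -a is treated as an error.
parse_once() {
  local opt seen_a=0
  OPTIND=1
  while getopts ":a" opt; do
    case $opt in
      a)
        if ((seen_a)); then
          echo "-a given more than once" >&2
          return 1
        fi
        seen_a=1
        echo "-a was triggered!"
        ;;
    esac
  done
}
```

parse_once -a prints the usual message, while parse_once -a -a complains and returns a non-zero status.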
An option with argument

Let’s extend our example from above. Just a little bit:

-a now takes an argument
on an error, the parsing exits with exit 1

while getopts ":a:" opt; do
  case $opt in
    a)  echo "-a was triggered, Parameter: $OPTARG" >&2 ;;
    \?) echo "Invalid option: -$OPTARG" >&2
        exit 1 ;;
    :)  echo "Option -$OPTARG requires an argument." >&2
        exit 1 ;;
  esac
done
Let’s do the very same tests we did in the last example:

Calling it without any arguments

$ ./go_test.sh
As above, nothing happened. It wasn’t triggered.
Calling it with non-option arguments

$ ./go_test.sh /etc/passwd
The very same case: It wasn’t triggered.
Calling it with option-arguments

Invalid option:

$ ./go_test.sh -b
Invalid option: -b
As expected, as above, getopts didn’t accept this option and acted as programmed.
Valid option, but without the mandatory argument:

$ ./go_test.sh -a
Option -a requires an argument.
The option was okay, but there is an argument missing.
Let’s provide the argument:

$ ./go_test.sh -a /etc/passwd
-a was triggered, Parameter: /etc/passwd
See also


How do I get it so that with no arguments passed, it returns text saying “no arguments passed, nothing triggered”?

I’d do it by checking $# before the while/getopts loop, if applicable:

if (($# == 0)); then
  echo "no arguments passed, nothing triggered"
  exit 1
fi

If you really need to check if getopts found something to process you could make up a variable for that check:


options_found=0
while getopts ":xyz" opt; do
  options_found=1
done

if ((!options_found)); then
  echo "no options found"
fi
Another method of checking whether it found anything at all is to run a separate if statement right before the while getopts call.

if ( ! getopts "abc:deh" opt ); then
  echo "Usage: `basename $0` options (-ab) (-c value) (-d) (-e) -h for help"
  exit 1
fi

while getopts "abc:deh" opt; do
  case $opt in
    a) do_something ;;   # placeholder actions
    b) do_another ;;
    c) var=$OPTARG ;;
  esac
done


Try this trick: when you discover that the OPTARG of -c is something beginning with a hyphen, reset OPTIND and re-run getopts (continue the while loop).

The code is relatively small, but I hope you get the idea.

Oh, of course, this isn’t perfect and needs some more robustness. It’s just an example.


while getopts :abc: opt; do
  case $opt in
    a) echo "option a" ;;
    b) echo "option b" ;;
    c)
      echo "option c"
      if [[ $OPTARG = -* ]]; then
        ((OPTIND--))    # push the mistaken "argument" back and re-parse it
        continue
      fi
      echo "(c) argument $OPTARG" ;;
    \?)
      echo "WTF!"
      exit 1 ;;
  esac
done

Stuff I Can Never Remember

To exit a vterm use ~.

On Cisco use Ctrl-Shift-6… I think

Adding disks to VIO server config:


lspv > /tmp/disk_config_pre_change
chmod 777 /tmp/disk_config_pre_change
inq >> /tmp/disk_config_pre_change

lsmap -all >> /tmp/disk_config_pre_change

chdev -l hdiskxx -a algorithm=round_robin -a reserve_policy=no_reserve

mkvdev -vdev <hdisk> -vadapter <vhost> -dev <device-name>
mkvdev -vdev hdisk14 -vadapter vhost17 -dev lparname_rootvg_disk1

Adding vpaths to server without running cfgmgr for whole box:
cfgmgr -l fscsi0
cfgmgr -l fscsi1


Analysing dumps:
hfd-wm-cov-db-02:/var/adm/ras >sysdumpdev -L

Device name: /dev/lg_dumplv
Major device number: 10
Minor device number: 11
Size: 474104832 bytes
Uncompressed Size: 2415139298 bytes
Date/Time: Fri 3 Sep 16:51:22 2010
Dump status: 0
Type of dump: traditional
dump completed successfully

hfd-wm-cov-db-02:/ >kdb /dev/lg_dumplv
/dev/lg_dumplv mapped from @ 700000000000000 to @ 700000040000000
Preserving 1799410 bytes of symbol table [/unix]
The dump is compressed. Run the following command:
dd if=/dev/lg_dumplv bs=512 skip=1 count=925987 > dumpfile.BZ; dmpuncompress dumpfile.BZ
925987+0 records in.
925987+0 records out.
— replaced with dumpfile

kdb ./dumpfile /unix


stat
Shows the system status and messages.
p (alias: proc) [*/slot/symb/eaddr]
Displays the process table.
u (alias: user) [-?][slot/symb/eaddr]
Displays the u_area.
th (alias: thread) [*/slot/symb/eaddr/-w ?]
Displays the thread table.
mst [slot] [[-a] symb/eaddr]
Displays the mstsave area for the specified thread.
f (alias: stack) [+x/-x][th {slot/eaddr}]
Displays all stack frames for the specified thread.
h (alias: ?) [topic]
Lists all subcommands and provides information about subcommands of kdb.
errpt
Displays error log messages.

Install Desktop Environment On CentOS 6.5 Minimal

After performing a base install of CentOS 6.5 using the minimal install CD, do the following to install a basic GNOME desktop environment:

# yum groupinstall "Desktop" "Desktop Platform" "X Window System" "Fonts"

Run the following on a particular package group to see detailed information, including a description and which packages it will install.

# yum groupinfo groupname

There are additional package groups if you want something more than a basic desktop environment. For example,

# yum -y groupinstall "General Purpose Desktop"

To see a list of all the installed and available package groups:

# yum grouplist

Once installed, you can start GNOME by running:

$ startx


Alternatively, switch to runlevel 5 to bring up the graphical environment:

$ /sbin/telinit 5

To have CentOS boot into runlevel 5 “X11” instead of runlevel 3 “Full multiuser mode”, modify the /etc/inittab file to change the default start-up level from

id:3:initdefault:

to

id:5:initdefault:
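On CentOS 6 the default runlevel is the id:N:initdefault: line in /etc/inittab. A sketch of scripting the 3-to-5 change, run here against a scratch copy rather than the real file (on an actual system, edit /etc/inittab as root and keep a backup):

```shell
# Stand-in for /etc/inittab so the edit can be demonstrated safely.
printf 'id:3:initdefault:\n' > /tmp/inittab.demo

# Flip the default runlevel from 3 (full multiuser) to 5 (X11).
sed -i 's/^id:3:initdefault:$/id:5:initdefault:/' /tmp/inittab.demo

cat /tmp/inittab.demo   # now reads: id:5:initdefault:
```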

My System Configuration

  • CentOS 6.5 x86 64-bit


Simplest Way To Rip A DVD To An ISO

dd if=/dev/dvd of=mynew.iso bs=2048

Now you can mount the iso as if it were an actual DVD or CD:

# mount -o ro,loop -t iso9660 mynew.iso /mnt/iso

This assumes that you have the mount point /mnt/iso already created.

Also, if you are going to modify the image somehow, you might want to leave off the ‘ro’ option.

If you want to make an ISO image of files or directories that have long filenames, then do something like this:

# mkisofs -o mynew.iso -J [files or directory]

Determine what device your CD/DVD burner is

The device is not always going to be /dev/cdrom or /dev/dvd. To determine what device you have, do this:

# dmesg | grep ROM

The output looks something like this:

[    1.625249] ata1.00: ATAPI: TEAC DVD-ROM DV28EV, R.AB, max UDMA/33
[    1.657106] scsi 0:0:0:0: CD-ROM            TEAC     DVD-ROM DV28EV   R.AB PQ: 0 ANSI: 5
[    1.672009] Uniform CD-ROM driver Revision: 3.20
[    1.672165] sr 0:0:0:0: Attached scsi CD-ROM sr0
[ 8795.604839] scsi 4:0:0:0: CD-ROM            Memorex  MRX-650LE v1     9M62 PQ: 0 ANSI: 0
[ 8795.756728] sr 4:0:0:0: Attached scsi CD-ROM sr1

So I have a TEAC DVD-ROM at SCSI id 0:0:0:0 as /dev/sr0,
and a Memorex at SCSI id 4:0:0:0 as /dev/sr1.
So if I want to burn a DVD with the Memorex burner, I use /dev/sr1.

Burning ISO images to DVD

Now if you want to create an actual DVD:

# growisofs -dvd-compat -Z /dev/dvd=mynew.iso


# growisofs -dvd-compat -Z /dev/[device]=mynew.iso

Or use whatever device the DVD writer shows up as.

Verify the burned DVD’s md5sum

To verify the burning to DVD:

# dd if=/dev/[device] | head -c `stat --format=%s mynew.iso` | md5sum

The resulting number you get should match the md5sum of the iso image.
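To see the trimming logic at work without burning anything, the same pipeline can be run against a scratch file standing in for the device (paths are illustrative); head cuts the read back to the exact image size, so the two checksums agree:

```shell
# Make a small scratch "image" in place of a real ISO.
dd if=/dev/urandom of=/tmp/mynew.iso bs=2048 count=16 2>/dev/null

# Checksum of the image itself...
md5sum /tmp/mynew.iso

# ...and the burned-disc style check (the file plays the role of /dev/[device]).
dd if=/tmp/mynew.iso 2>/dev/null | head -c "$(stat --format=%s /tmp/mynew.iso)" | md5sum
```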

Burning ISO images to CD

You need to determine the SCSI address of the CD burner. As root, issue the command:

# cdrecord --scanbus

In my case the CD Burner is on 2,0,0:

Cdrecord-Clone 2.01 (cpu-pc-linux-gnu) Copyright (C) 1995-2004 Jörg Schilling
Note: This version is an unofficial (modified) version with DVD support
Note: and therefore may have bugs that are not present in the original.
Note: Please send bug reports or support requests to http://bugzilla.redhat.com/bugzilla
Note: The author of cdrecord should not be bothered with problems in this version.
Linux sg driver version: 3.5.27
Using libscg version 'schily-0.8'.
cdrecord: Warning: using inofficial libscg transport code version (schily - Red Hat-scsi-linux-sg.c-1.83-RH '@(#)scsi-linux-sg.c      1.83 04/05/20 Copyright 1997 J. Schilling').
        2,0,0   200) 'TSSTcorp' 'DVD+-RW TS-L632H' 'D400' Removable CD-ROM
        2,1,0   201) *
        2,2,0   202) *
        2,3,0   203) *
        2,4,0   204) *
        2,5,0   205) *
        2,6,0   206) *
        2,7,0   207) *

Then to burn the CD do:

# cdrecord --dev=2,0,0 name.iso

To erase a re-writable CD before burning do:

# cdrecord --dev=2,0,0 --blank=fast

Static Routing in Red Hat

Adding a route in Red Hat

It’s a wee bit different from AIX, where you can just add the route with:

# route add <destination> <gateway>

Under Red Hat you need to define static routes using the route command. The configuration is stored in /etc/sysconfig/network-scripts/route-eth0 for the eth0 interface.

Update route using route command

Type the following command:
# route add -net <network> netmask <netmask> gw <gateway> eth0
# route -n



Create static routing file

The drawback of the above route command is that when RHEL reboots it will forget static routes. So store them in the configuration file:
echo '<network>/<prefix> via <gateway>' >> /etc/sysconfig/network-scripts/route-eth0
Restart networking:
# service network restart
Verify new changes:
# route -n
# ping <gateway>
# ping <remote-host>
# ping google.com
# traceroute google.com
# traceroute <remote-host>

Further readings:

  • man pages for the ip and route commands

Convert Scalar Time to Local Time in Perl

If you have a user’s last login time in a seemingly meaningless format, you can translate it into local time with the following Perl one-liner:
lsuser -f user1 | grep time_last_login

perl -e 'print scalar localtime(1392998673);'
Fri Feb 21 16:04:33 2014