How To Configure Yum and Repositories


The configuration file for yum and related utilities is located at /etc/yum.conf. This file contains one mandatory [main] section, which allows you to set Yum options that have global effect, and may also contain one or more [repository] sections, which allow you to set repository-specific options. However, it is recommended to define individual repositories in new or existing .repo files in the /etc/yum.repos.d/ directory. The values you define in individual [repository] sections override values set in the [main] section of the /etc/yum.conf file.

This section shows you how to:

set global Yum options by editing the [main] section of the /etc/yum.conf configuration file;

set options for individual repositories by editing the [repository] sections in /etc/yum.conf and .repo files in the /etc/yum.repos.d/ directory;

use Yum variables in /etc/yum.conf and files in the /etc/yum.repos.d/ directory so that dynamic version and architecture values are handled correctly;

add, enable, and disable Yum repositories on the command line; and,

set up your own custom Yum repository.

5.3.1. Setting [main] Options

The /etc/yum.conf configuration file contains exactly one [main] section, and while some of the key-value pairs in this section affect how yum operates, others affect how Yum treats repositories. You can add many additional options under the [main] section heading in /etc/yum.conf.

A sample /etc/yum.conf configuration file can look like this:


[comments abridged]

# PUT YOUR REPOS HERE OR IN separate files named file.repo
# in /etc/yum.repos.d
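For reference, a minimal [main] section along these lines is typical. The values shown match the defaults described below, but treat this as an illustrative sketch rather than a copy of any particular distribution's shipped file:

```ini
[main]
cachedir=/var/cache/yum/$basearch/$releasever
keepcache=0
debuglevel=2
logfile=/var/log/yum.log
exactarch=1
obsoletes=1
gpgcheck=1
plugins=1
installonly_limit=3
```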

The following are the most commonly-used options in the [main] section:


assumeyes=value

…where value is one of:

0 — yum should prompt for confirmation of critical actions it performs. This is the default.

1 — Do not prompt for confirmation of critical yum actions. If assumeyes=1 is set, yum behaves in the same way that the command line option -y does.

cachedir=directory

…where directory is an absolute path to the directory where Yum should store its cache and database files. By default, Yum’s cache directory is /var/cache/yum/$basearch/$releasever.


debuglevel=value

…where value is an integer between 0 and 10. Setting a higher debuglevel value causes yum to display more detailed debugging output. debuglevel=0 disables debugging output, while debuglevel=2 is the default.


exactarch=value

…where value is one of:

0 — Do not take into account the exact architecture when updating packages.

1 — Consider the exact architecture when updating packages. With this setting, yum will not install an i686 package to update an i386 package already installed on the system. This is the default.
exclude=package_name [more_package_names]

This option allows you to exclude packages by keyword during installation/updates. Listing multiple packages for exclusion can be accomplished by quoting a space-delimited list of packages. Shell globs using wildcards (for example, * and ?) are allowed.
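For example, an (illustrative) exclusion line skipping all kernel packages and anything beginning with php would look like this:

```ini
exclude=kernel* php*
```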


gpgcheck=value

…where value is one of:

0 — Disable GPG signature-checking on packages in all repositories, including local package installation.

1 — Enable GPG signature-checking on all packages in all repositories, including local package installation. gpgcheck=1 is the default, and thus all packages’ signatures are checked.

If this option is set in the [main] section of the /etc/yum.conf file, it sets the GPG-checking rule for all repositories. However, you can also set gpgcheck=value for individual repositories instead; that is, you can enable GPG-checking on one repository while disabling it on another. Setting gpgcheck=value for an individual repository in its corresponding .repo file overrides the default if it is present in /etc/yum.conf.
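A sketch of what that per-repository override looks like in practice. Both repository IDs, names, and URLs below are made up for the example:

```ini
# /etc/yum.repos.d/example.repo -- hypothetical repository definitions
[trusted-repo]
name=Trusted signed packages
baseurl=http://repo.example.com/trusted/$releasever/$basearch/
enabled=1
gpgcheck=1

[local-test-repo]
name=Unsigned local test builds
baseurl=file:///srv/repo/test/
enabled=0
gpgcheck=0
```

Here signature checking stays on for trusted-repo even if the [main] default were 0, and is switched off only for the local test repository.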


groupremove_leaf_only=value

…where value is one of:

0 — yum should not check the dependencies of each package when removing a package group. With this setting, yum removes all packages in a package group, regardless of whether those packages are required by other packages or groups. groupremove_leaf_only=0 is the default.

1 — yum should check the dependencies of each package when removing a package group, and remove only those packages which are not required by any other package or group.

For more information on removing packages, refer to Intelligent package group removal.

installonlypkgs=space separated list of packages

Here you can provide a space-separated list of packages which yum can install, but will never update. Refer to the yum.conf(5) manual page for the list of packages which are install-only by default.

If you add the installonlypkgs directive to /etc/yum.conf, you should ensure that you list all of the packages that should be install-only, including any of those listed under the installonlypkgs section of yum.conf(5). In particular, kernel packages should always be listed in installonlypkgs (as they are by default), and installonly_limit should always be set to a value greater than 2 so that a backup kernel is always available in case the default one fails to boot.


installonly_limit=value

…where value is an integer representing the maximum number of versions that can be installed simultaneously for any single package listed in the installonlypkgs directive.

The defaults for the installonlypkgs directive include several different kernel packages, so be aware that changing the value of installonly_limit will also affect the maximum number of installed versions of any single kernel package. The default value listed in /etc/yum.conf is installonly_limit=3, and it is not recommended to decrease this value, particularly below 2.

This is because once the limit is reached, installing a new kernel removes the oldest installed kernel; if the limit is set too low, you could be left without a known-good kernel to fall back on if the newest one fails to boot.


keepcache=value

…where value is one of:

0 — Do not retain the cache of headers and packages after a successful installation. This is the default.

1 — Retain the cache after a successful installation.


logfile=file_name

…where file_name is an absolute path to the file in which yum should write its logging output. By default, yum logs to /var/log/yum.log.


multilib_policy=value

…where value is one of:

best — install the best-choice architecture for this system. For example, setting multilib_policy=best on an AMD64 system causes yum to install 64-bit versions of all packages.

all — always install every possible architecture for every package. For example, with multilib_policy set to all on an AMD64 system, yum would install both the i586 and AMD64 versions of a package, if both were available.


obsoletes=value

…where value is one of:

0 — Disable yum’s obsoletes processing logic when performing updates.

1 — Enable yum’s obsoletes processing logic when performing updates. When one package declares in its spec file that it obsoletes another package, the latter package will be replaced by the former when the former is installed. Obsoletes are declared, for example, when a package is renamed. obsoletes=1 is the default.


plugins=value

…where value is one of:

0 — Disable all Yum plug-ins globally.
Disabling all plug-ins is not advised

Disabling all plug-ins is not advised because certain plug-ins provide important Yum services. In particular, rhnplugin provides support for RHN Classic, and product-id and subscription-manager plug-ins provide support for the certificate-based Content Delivery Network (CDN). Disabling plug-ins globally is provided as a convenience option, and is generally only recommended when diagnosing a potential problem with Yum.

1 — Enable all Yum plug-ins globally. With plugins=1, you can still disable a specific Yum plug-in by setting enabled=0 in that plug-in’s configuration file.


reposdir=directory

…where directory is an absolute path to the directory where .repo files are located. All .repo files contain repository information (similar to the [repository] sections of /etc/yum.conf). yum collects all repository information from .repo files and the [repository] section of the /etc/yum.conf file to create a master list of repositories to use for transactions. If reposdir is not set, yum uses the default directory /etc/yum.repos.d/.


retries=value

…where value is an integer 0 or greater. This value sets the number of times yum should attempt to retrieve a file before returning an error. Setting this to 0 makes yum retry forever. The default value is 10.

Stadium Zoom

Quick flipbook animation I did on


Took about 5 minutes. I reckon I’ll have another go at this in future.

Useful Scrapebox Footprints and other SEO tips

Hope this can help some noobs out there, and it’s always worth reminding yourself of the basics once in a while too!


The following will help you find directories related to your niche.
intitle:add+url “your keyword”
intitle:submit+site “your keyword”
intitle:submit+url “your keyword”
intitle:add+your+site “your keyword”
intitle:add+site “your keyword”
intitle:directory “your keyword”
intitle:sites “your keyword”
intitle:list “your keyword”

These will help you find forums in your niche.
“powered by SMF” your keyword
“powered by IPB” your keyword
“powered by MyBB” your keyword
“powered by PunBB” your keyword
“powered by phpBB” your keyword
“powered by vBulletin” your keyword

These will help you find blogs to comment on (including specific searches for .edu and .gov sites that often have a high PageRank and, some believe, a higher authority than commercial sites).
inurl:wp-login.php +blog
inurl:wp-login.php +blog "your keyword"
"your keyword" -"you must be logged in" -"comments are closed" "no comments" +blogroll -"posting closed"
"Powered by BlogEngine.NET" inurl:blog "post a comment" -"comments closed" -"you must be logged in" "YOUR KEYWORD"

Lastly, this is a bit more advanced and I hope you find it useful… “YOUR KEYWORD” “add to this list” – this will help you to find Squidoo lenses where you are able to add/suggest your site as a link that is related to the lens. You must be logged into your Squidoo account before you attempt to add a link to these lenses.

How To Change WordPress Theme When You Can’t Login!

Got the white screen from your WordPress page?

Getting php errors and not able to log in?

Spent hours (that you don’t have) debugging those errors and getting nowhere?

Then read on…

I had an unknown error in my php code that I just couldn’t debug – something to do with an unexpected ‘<‘ in my custom-image.php file.

I spent an hour trying to debug the code, then decided it was probably an error in the functions file from which it was being called.

I think it’s a bug in the new twentytwelve WP theme when adding custom header files.

Anyway, I ended up switching theme using this method and I’m back online!

If your site gets broken and you can’t access the wp-admin page, there are a number of things you can do. Going to cpanel > file manager > wp-content and renaming the plugins folder can sometimes do the trick.

You can also rename the offending theme and WordPress should revert to the default theme. However, if none of these work, you may need to make changes to the site’s database directly. If it comes to this, don’t worry, it’s pretty simple. Just follow the instructions below and you should be back in business.

Step 1:

Log in to your hosting control panel (cPanel) and find the phpMyAdmin link. It should be located under the Databases heading.

Where to find phpmyadmin

Step 2:

Select the WP database which stores your site’s data. Hopefully you named this something useful when you set up your site, so it should be easy to work out which one you’re looking for.

Step 3:

Click on the table named YOURPREFIX_options. If you didn’t change the table prefix when installing, the default is wp_options.

Change WordPress theme through PHPMyAdmin

Step 4:

Select page 2 from the table’s footer:




Step 5:

Now find the “template” and “stylesheet” fields and click on the edit icon. You can then change them from the current theme to any other you have installed.

Change WP Theme and Stylesheet
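If you prefer, the same edit can be made in one go from phpMyAdmin's SQL tab. This is a sketch that assumes the default wp_ table prefix and that the theme folder you are switching to is called twentyeleven; substitute your own values:

```sql
UPDATE wp_options
SET option_value = 'twentyeleven'
WHERE option_name IN ('template', 'stylesheet');
```

The template and stylesheet rows normally hold the same folder name, which is why one statement updates both.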

Then Robert should indeed be your mother’s brother….


Any probs, please leave a comment.



Scrapebox Review

Scrapebox Review Introduction: What is Scrapebox and What Does it Do?

Scrapebox is a downloadable program that runs on your laptop or desktop. The price of Scrapebox (at the time of this writing) is $97, but it is available on some sites for just $57. Scrapebox performs many automated tasks including:
  • Allowing you to search for blogs which are related to a specific keyword.
  • Allowing you to create “phantom traffic” to web pages, thus making it appear that those pages are getting more traffic than they really are (more on this in a second).
  • Allowing you to ping the links which you’ve built and submit those links to RSS feeds, thus theoretically increasing the value of your backlink.
  • Allowing you to automate generic blog comments and create backlinks in much faster time using blog commenting.
Scrapebox performs all of these functions very quickly and with little need to learn how to use the program. For example, by typing in the keyword “how to play guitar” and hitting “start harvesting” you can gather thousands of blog URLs which are related to the topic of playing the guitar using search results from:
  • Google
  • Yahoo!
  • Bing
  • AOL Search
On top of this, you can select to have the duplicate URLs removed from your list and also have links removed to URLs which are either password protected or which return errors. Scrapebox also allows you to filter URLs according to PageRank so that you can select only the highest-ranking pages for building links to. Now, let’s talk about the features of Scrapebox and how useful they actually are for your SEO strategy.


Using Scrapebox’s URL Harvesting Feature


This is easily the best white hat SEO feature for Scrapebox and one which can save you a LOT of time. For example, you can harvest blog URLs, export them to an Excel spreadsheet and hand them over to an outsourcer who will create blog comments and help you to build qualified backlinks to your site. For best results, remove the duplicate URLs and broken URLs before you export your list and send it to your outsourcer for link building purposes.


It’s also a good idea to provide the following information in the spreadsheet along with the URLs of the blogs where you want to comment:


  • The specific URL which you want the link built to.
  • The email which you want to submit the blog comment under (you’re best off using a “dummy email” account for this, one which isn’t your primary email, because you’ll be getting a lot of emails from the blogs that you comment on).
  • The keyword you want to be used as the blog commenter’s “name”; this will ensure that your keyword is used in the anchor text of the link back to your site.
To ensure that your outsourcer is leaving comments which will have a shot at getting approved by the blog owner, you can also request them to copy and paste their comment into the spreadsheet next to the URL where they left the comment.


This way you can check the comments and make sure they’re not leaving the generic: “Nice blog, I liked reading it,” or “I like this topic, do you know where I can read more about it?” These types of comments usually get deleted by blog owners, meaning that you built the link for nothing.

Using the Scrapebox Pinging and RSS Feed Feature

This feature allows you to “backlink your backlinks” by submitting them to RSS feeds, which is believed to get them indexed faster by the search engines and to enhance the value of each link. RSS submission and pinging does have some value in improving the power of your backlinks, since sites with a high number of inbound links are obviously considered more valuable by the search engines. However, some SEO experts claim that if your site is less than three months old, it’s better to hold off on pinging your backlinks and submitting them to RSS feeds, and to let the links get indexed naturally.

Scrapebox Review Conclusion


In conclusion, Scrapebox is a valuable tool for automating your SEO processes and saving you a lot of time and money. Just be sure that you follow the suggestions in this article. Use Scrapebox to harvest URLs for building backlinks and (as long as your site is more than three months old), use the RSS submission and pinging features to juice up your backlinks.


However, be careful not to be lured into the laziness trap of shortcutting your SEO with proxy server traffic or generic blog comments, which will end up deleted most of the time anyway. Avoid those shortcuts, and Scrapebox can be a very valuable SEO tool.

What’s filling up root? | /dev/null 2>&1

I had a right noodle scratcher today when trying to find the source of the / filesystem filling up on an AIX 6.1 system.

I did all the normal things, like

du -kx | sort -n

find / -xdev -ls

Even went as far as some pretty complex find commands:
To find file names only:

find / ! -name / -prune ! -type l |grep -vwE $(mount|tail +3|awk '{if ( /^[a-zA-Z]/ ) {print $3} else {print $2}}'|grep -vE "^/.*/|^/$"|xargs|tr ' ' '|')

To show all the detail:

find / ! -name / -prune ! -type l -ls|grep -vwE $(mount|tail +3|awk '{if ( /^[a-zA-Z]/ ) {print $3} else {print $2}}'|grep -vE "^/.*/|^/$"|xargs|tr ' ' '|')

It is also useful when you try to check what is filling up root(/) filesystem:

du -sk $(find / ! -name / -prune ! -type l |grep -vwE $(mount|tail +3|awk '{if ( /^[a-zA-Z]/ ) {print $3} else {print $2}}'|grep -vE "^/.*/|^/$"|xargs|tr ' ' '|'))|sort -k1rn

Only when I went back to the beginning did I spot the issue:
find / -xdev -ls | grep "Nov" listed the files that had recently been written to /. Only then did I notice a 168M file:
# find / -xdev -ls | grep "Nov 2"
11 8 drwxrwxr-x 5 root system 8192 Nov 27 12:23 /dev
4174 4 drwxrwx--- 2 root system 4096 Nov 27 12:41 /dev/.SRC-unix
261 0 crw------- 1 root system 10, 0 Nov 27 12:23 /dev/__vg10
265 0 crw--w--w- 1 root system 6, 0 Nov 22 15:56 /dev/error
272 0 brw-rw---- 1 root system 10, 4 Nov 27 12:23 /dev/hd4
273 0 brw-rw---- 1 root system 10, 1 Nov 27 12:23 /dev/hd5
281 0 crw-rw-rw- 1 root system 2, 2 Nov 27 12:43 /dev/null
844 172764 -rw-r--r-- 1 root system 176910336 Nov 22 15:55 /dev/null 2>&1

Turns out this is a bug with one of the IBM agents (the cas agent, I think).


You can just delete this file. There is a bug fix that will address the issue in the next ML. Until then, this file will re-appear, so a cron job might be a good measure.
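Until the fix lands, a root crontab entry along these lines would clear the file down hourly. The schedule is arbitrary, and the path must be quoted because the bogus file name literally contains a space:

```
0 * * * * rm -f '/dev/null 2>&1'
```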


Additional commands:

If /var is the issue

 # df -g /var
Filesystem    GB blocks      Free %Used    Iused %Iused Mounted on
/dev/bos_hd9var      0.50      0.12   76%     8452    22% /var

# cd /var

find . -xdev -type f -name "*core*" -exec file {} \; | grep "AIX core file fulldump" | awk -F : '{print $1}' | xargs rm

# du -smx * | sort -nr | head -5
436.13  was
302.21  opt
42.05   teamquest
18.94   log
17.38   adm

# du -smx was/* | sort -nr | head -5
418.46  profiles
13.07   jdbc
4.60    scripts
0.00    lost+found
0.00    heapdump

# du -smx opt/* | sort -nr | head -5
302.21  tivoli
0.00    freeware
0.00    csm

# find . -type f  -xdev | xargs ls -l | sort -rnk5,5 | head -10


How can I find WWPNs of FC adapters from the SMS menu?

It is possible to find the WWPNs in the OpenFirmware Prompt – at least on recent hardware.

From the HMC boot the LPAR into the Open Firmware Prompt and issue the ioinfo command at the ok-prompt:

1 = SMS Menu 5 = Default Boot List
8 = Open Firmware Prompt 6 = Stored Boot List

Memory Keyboard Network SCSI Speaker

0 > ioinfo

This tool gives you information about SCSI,IDE,SATA,SAS,and USB devices attached to the system

Select a tool from the following


q – quit/exit

==> 6

FCINFO Main Menu
Select a FC Node from the following list:
# Location Code Pathname
1. U5877.001.0082113-P1-C10-T1 /pci@80000002000012b/fibre-channel@0
2. U5877.001.0082113-P1-C10-T2 /pci@80000002000012b/fibre-channel@0,1
3. U5877.001.0082924-P1-C10-T1 /pci@80000002000013b/fibre-channel@0
4. U5877.001.0082924-P1-C10-T2 /pci@80000002000013b/fibre-channel@0,1

q – Quit/Exit

==> 1

FC Node Menu
FC Node String: /pci@80000002000012b/fibre-channel@0
FC Node WorldWidePortName: 10000000c9d08fd0
1. List Attached FC Devices
2. Select a FC Device
3. Enable/Disable FC Adapter Debug flags

q – Quit/Exit

How to Check for Missing Filesets in a Maintenance Level

instfix -i | grep AIX_ML

instfix -ciqk 61-06-051115_SP | grep :-:

To list which software is below AIX(R) Version 6.1 technology
level 0, service pack 1, type:

oslevel -s -l 6100-00-01-0748





Hints, Tips and usage of the ‘instfix’ command

This document will describe many of the various and most common uses of the ‘instfix’ command.

The main topics covered will include:

– TL versus ML – Which is correct?
– Usage of the ‘instfix’ command to check for APARs
– Usage of the ‘instfix’ command to install APARs
– Adding missing APAR information to the ‘fix’ object class of the ODM

TL versus ML – Which is correct?

Starting in 5.3 TL7 the terminology changed: what used to be called an ML, or Maintenance Level, is now called a TL, or Technology Level. The format for the numbering of the filesets also changed at that time, so that from 5.3 TL7 onward the third number of a base-level fileset indicates the TL. "TL" and "ML" are technically the same thing and interchangeable, but "TL" is generally what is used now; the 'instfix' command, however, wasn't changed and still uses ML.

Usage of the ‘instfix’ command to check for APARs

To use the ‘instfix’ command to determine what TLs are currently installed on the system as well as the status of the install (i.e. whether they are completely installed or not) you can use the following command:

# instfix -i | grep ML

All filesets for 6100-00_AIX_ML were found.

All filesets for were found.

Not all filesets for 6100-01_AIX_ML were found.


If something is missing from a TL you can use ‘instfix’ to determine what is missing using the following command:
# instfix -icqk <ML LEVEL> | grep :-:

# instfix -icqk 6100-01_AIX_ML | grep :-:
6100-01_AIX_ML:X11.adt.imake: 6100-01 Update
6100-01_AIX_ML:X11.samples.apps.clients: 6100-01 Update
6100-01_AIX_ML:X11.samples.lib.Core: 6100-01 Update

You can also use the ‘instfix’ command to check what Service Packs are installed on a system and check their status. For that you would use the following command:

# instfix -i | grep _SP
All filesets for 61-00-010748_SP were found.
All filesets for 61-00-020750_SP were found.
All filesets for 61-00-030808_SP were found.
All filesets for 61-00-040815_SP were found.
All filesets for 61-01-010823_SP were found.

If you just want to check to see if a particular APAR is installed you can use the following:

# instfix -ik <fix>
# instfix -ik IZ04606
All filesets for IZ04606 were found.

If you want to find out more information about a particular APAR you can use the following:
# instfix -aik <fix>
# instfix -aik IZ04606
IZ04606 Abstract: pwdadm not working as intended for authuser

IZ04606 Symptom Text:
A user with a role with
authorization is unable to use the pwdadm command to set the
ADMIN flag for a user:
$ rolelist -ea
$ pwdadm -f ADMIN abc
3004-692 Error changing “flags” to “ADMIN” : You do not have
All filesets for IZ04606 were found.

If you want to see a list of fixes that are on a CD or in a directory you can use the following:
# instfix -Td /dev/cd0
# instfix -Td <directory path>
Here’s a sample of what the output will look like:
IZ50383 Hang in mkuser command
IZ50386 System may crash in iodone+000044 after failed health check
IZ50388 Crash when unconfiguring path to open MPIO Disk.
IZ50482 DELAYED_INTS error log entry for 10-Gigabit Ethernet Adapter
IZ50483 DSI at kxent_ras_callback
IZ50591 Fixdata for new service pack

You may want to know if any of the APARs you have in a directory contain any fixes for say multibos. You can check that with the following:
# instfix -Td . | grep -i multibos
IY78256 multibos bootlist support in diag.

You may have a need to create a list of APARs that are included in a directory. That can easily be done with the following command:
# instfix -Td . | cut -f1 -d ” ” > /tmp/fix.list
# cat /tmp/fix.list | pg

If you want to see a list of what filesets are included with an APAR you can get that with the following command:
# instfix -ivk <fix>
# instfix -ivk IZ50591
IZ50591 Abstract: Fixdata for new service pack

Fileset bos.rte.install: is applied on the system.
All filesets for IZ50591 were found.

Then if you needed to know the date it was installed
# lslpp -h <one of the filesets from above>
# lslpp -h bos.rte.install
Fileset                    Level  Action  Status    Date      Time
--------------------------------------------------------------------
Path: /usr/lib/objrepos
  bos.rte.install                 COMMIT  COMPLETE  07/12/07  14:55:01
                                  COMMIT  COMPLETE  07/12/07  14:55:29
                                  COMMIT  COMPLETE  01/05/08  21:12:55
                                  COMMIT  COMPLETE  07/02/08  15:05:54
                                  COMMIT  COMPLETE  12/07/08  19:45:09
                                  COMMIT  COMPLETE  05/16/09  16:59:59

If you have a particular fileset level installed on your system and you want to determine what APAR’s are associated with it:
# instfix -aiv | grep -p <fileset>:<level>
For example:
# instfix -aiv | grep -p devices.pciex.b3154a63.rte:
Fileset devices.pciex.b3154a63.rte: is applied on the system.
All filesets for IZ48863 were found.
IZ50114 Abstract: IB Applications using IbBaseLib may hang on close

Fileset devices.chrp.IBM.lhca.rte: is not applied on the system.
Fileset devices.common.IBM.ib.rte: is not applied on the system.
Fileset devices.pci.b315445a.rte: is not applied on the system.
Fileset devices.pciex.b3154a63.rte: is applied on the system.
Not all filesets for IZ50114 were found.

So devices.pciex.b3154a63.rte is the only fileset for IZ48863 and one of four filesets for IZ50114.

If you prefer to use smitty to check for fixes the fastpath is
# smitty show_apar_stat

Usage of the ‘instfix’ command to install APARs

Installing an APAR using the ‘instfix’ command is fairly straightforward.
To install a fix from cd0
# instfix -k <fix> -d /dev/cd0

To install a fix from a directory
# instfix -k <fix> -d <directory>
# instfix -k IZ36737 -d .
Pre-installation Verification…
Verifying selections…done
Verifying requisites…done

From there the installation will continue.

If you prefer to use smitty to install a fix the fastpath is:
# smitty update_by_fix

Adding missing APAR information to the ‘fix’ object class of the ODM

If an APAR doesn’t show up with instfix -ik <APAR number> but it is installed, you can check to see if it’s in the ODM:
# ODMDIR=/usr/lib/objrepos odmget fix | grep -p <APAR number>

If an APAR doesn’t show up with the instfix -ik command it may be an efix.
Check for ifixes / efixes with the emgr command
# emgr -l
or to get more info
# emgr -lv3

If a TL and/or SP doesn’t show up with oslevel -rq or -sq but you know the level is on the system, the ODM is missing the fix data.

Note: You will be modifying the ODM on the system that is missing the fix data. If you are unfamiliar with that you may want to call the support center for assistance.

To get it in the ODM you can copy it from another system using the following procedure:

On a good system at the same level

# ODMDIR=/usr/lib/objrepos odmget -q name=<ML, SP or APAR > fix > /tmp/fix.backup

Here’s an APAR example
# ODMDIR=/usr/lib/objrepos odmget -q name=IZ11011 fix > /tmp/fix.backup
# cat /tmp/fix.backup

name = “IZ11011”
abstract = “Install commit only operation fails”
type = “f”
filesets = “bos.rte.install:\n\

symptom = ” Not able to do a commit only operation on a fileset.\n\
Fileset is left in the apply state.\n\

And here’s an example of a Technology Level
# ODMDIR=/usr/lib/objrepos odmget -q name=5300-10_AIX_ML fix > /tmp/fix.backup

And this is an example of a Service Pack
# ODMDIR=/usr/lib/objrepos odmget -q name=53-10-010921_SP fix > /tmp/fix.backup

Note: if you aren’t sure of the format to use above for the ML or SP level you can run the following command to get a list of ML’s
# ODMDIR=/usr/lib/objrepos odmget fix | grep _ML | pg

or the following to get a list of SP’s
# ODMDIR=/usr/lib/objrepos odmget fix | grep _SP | pg

ftp /tmp/fix.backup to the system not seeing the fix level or APAR

Then on the system you ftp’d it to: backup the ODM
Do this from the / (root) directory
# tar -cvf /tmp/odm.tar ./etc/objrepos ./usr/lib/objrepos
Note: Check the space in /tmp first and increase if necessary

Follow these steps to add the fix data to the ODM:

# ODMDIR=/usr/lib/objrepos odmadd /tmp/fix.backup

# oslevel -rf
# oslevel -r

# oslevel -sf
# oslevel -s

>> The levels should now be correct or if you just added an APAR it should show up now with
# instfix -ik <APAR #>
# instfix -ik IZ11011
All filesets for IZ11011 were found.

Note: If after doing an odmadd with the fix data from another system the fix data still isn’t showing up and rootvg is mirrored you may need to do the following steps:
# synclvodm -Pv rootvg
# savebase
# bosboot -ad /dev/ipldevice

How to post a delayed email in Outlook

Just found this one the other day. Quality if you come in from the pub and want to email in sick at 3am, but need to make it look like you sent it in the morning!

Nice one to have up your sleeve!

How to gunzip and untar all at once!

This tip is a rather simple but useful one. It’s not a trick or anything fancy, but just something that I somehow didn’t know for a long time that I wish I had. A common thing to do in Linux/Unix/Whateverix is to download a tarball archive that has been gzipped and then extract and untar it. So you might do something like this:

gunzip myfile.tar.gz
tar -xvf myfile.tar

The first command unzips it. The second command extracts the tar archive. What I didn’t know is that you can combine all of this into one simple command! Just do the following:

Most Linux versions support the following command:
tar -zxvf myfile.tar.gz

However, on AIX you need to pipe the commands like this (the -c flag makes gunzip write to standard output instead of replacing the file):
gunzip -c myfile.tar.gz | tar xvf -

or try:

gunzip < abc.tar.gz | tar xvf -

That's all there is to it!
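To convince yourself the one-step form works, here is a self-contained demo you can run in an empty scratch directory (all file names are made up for the example):

```shell
# throwaway demo: build a gzipped tarball, then extract it in one step
mkdir -p demo
echo "hello" > demo/file.txt
tar -cf archive.tar demo          # create the tar archive
gzip -f archive.tar               # compress it -> archive.tar.gz
rm -rf demo                       # delete the original so extraction is provable
tar -zxf archive.tar.gz           # decompress and extract in a single command
cat demo/file.txt                 # the file is back
```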