Showing posts with label yum. Show all posts

30 October 2012

disable IPv6 DNS results

We had an end user appear in the main #centos IRC channel the other day with an IPv6 problem.  That person had leased a VPS somewhere, and their provider had included and enabled IPv6, at least partially.  Something was wrong in the network fabric: while some IPv6 services worked, others did not.  DNS queries returned AAAA records, but the VM hoster was not transiting port 80 TCP traffic.  Very curious, and frustrating to the end user, who just wanted yum to work so they could install updates and packages on their instance.

The culprit is the grafting of IPv6 readiness into man 2 getaddrinfo.  This is the way of the future, so there is no fighting it on a long-term basis, but tactically, having a means to be IPv4-only is appealing for people just wanting to work in the older address space.  The TL;DR takeaway of that man page is that on a properly functioning system, name resolution answers under IPv6 are preferred, and only if those are not available does one fall back to the older IPv4.  But this places a premium on IPv6 actually working when present.  We have shipped a full native IPv6 setup for customers at PMMan for a couple of years now, but I assure you that we had some head-scratching as we rolled it out, and found customers using tunnels from HE or SixXS were also leaking advertisements to other VMs.  We added rules to filter out the false traffic after a bit of tracing with tcpdump.
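A quick way to see what the resolver (the getaddrinfo path) hands back is getent.  localhost is used below so the sketch needs no network; any dual-stack hostname will show the same ordering on a live host:

```shell
# List every address the system resolver returns for a name.
# On a host with IPv6 configured, the IPv6 answers sort ahead
# of the IPv4 ones, matching the preference getaddrinfo documents.
getent ahosts localhost
```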

I have blogged about this before, when IPv6 link-local address resolution (addresses in the fe80:: range) was confusing distcc under Debian a couple of years ago.  There are links in the CentOS wiki for approaches to disabling IPv6 traffic, which vary between C5 and C6.

That last-mentioned article has an outlink to a bugzilla ticket that offers food for thought.  It mentions in passing that one can direct a nameserver NOT to deliver IPv6 results with a fairly simple tweak:
Another option is to add this to /etc/sysconfig/named:
OPTIONS="-4"

... so it should be possible to set up a local caching nameserver on the localhost, configured NOT to return IPv6 content, and so work around the issue.  This smells a bit 'hackish', but it has the benefit of being a single method that should work in the general case, not tied to any particular kernel version or other variable.
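A minimal sketch of that workaround, assuming stock CentOS package names and paths (verify against your release; per the man page, -4 restricts named to IPv4 transport):

```shell
# Sketch: a local caching nameserver running IPv4-only.
# Package names assume a stock CentOS install; adjust to taste.
yum -y install bind bind-utils

# the tweak from the bugzilla ticket: IPv4-only operation
echo 'OPTIONS="-4"' >> /etc/sysconfig/named

/sbin/chkconfig named on
/sbin/service named start

# point the local resolver at the new cache
echo 'nameserver 127.0.0.1' > /etc/resolv.conf
```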

07 September 2012

RPM and YUM trick: synchronize packages installed on two hosts

The question was asked in IRC today:

hello folks, is there any way to install packages from a list written by yum list installed? I've two CentOS 6.3 hosts and I like to get them with the same packages installed (also versions)

Here is a quick (and accurate) answer:

  1. Take and test backups to taste
  2. Run:
    rpm -qa --qf '%{name}\n' | sort > MANIFEST
    Note: that is a backslash-n; the HTML markup makes it hard to see the distinction
  3. Then copy that MANIFEST to the second unit and run:
    yum -y install `cat MANIFEST`
    Note: the backticks around the cat perform command substitution, feeding the list to yum
  4. Finish by running:
    yum -y update
    on each unit

For extra credit, re-run the MANIFEST creator on each unit, and use diff to find any variances.
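That diff step can be sketched like so; the two MANIFEST files below are toy stand-ins for the real rpm -qa output on the two units:

```shell
# Toy manifests standing in for `rpm -qa --qf '%{name}\n' | sort`
# run on each unit; the file names are illustrative.
printf 'bash\ncoreutils\njoe\n' > MANIFEST.hostA
printf 'bash\ncoreutils\n'      > MANIFEST.hostB

# lines prefixed '<' exist only on hostA; '>' only on hostB
diff MANIFEST.hostA MANIFEST.hostB || true
```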

08 October 2010

checklist RO rsync server

Setting up a new read-only rsync server

The primary use case we describe is deploying a read-only rsync server with no end-user accounts, used for distribution of content (here, to move a builder result archive that is intentionally NOT 'visible' from the internet to a more capable transfer server).  From there, the content is integrated into an internal archiving server, and after that into a publicly accessible binary archive, reachable through ftp, rsync, or yum.

As before, we start with a freshly deployed and hardened PMman instance.  At all times, we strive to follow proper sysadmin 'best practices' discipline with SELinux, wrappers, and iptables.

Install and enable rsync, the package holding the stock rsync daemon.  As rsync supports wrappers, we also need xinetd, the package holding the stock inetd in recent Red Hat derived distributions.  Let's get started:

  1. yum can do the install trivially

    yum install rsync xinetd
  2. Then enable the needed services:

    /sbin/chkconfig rsync on
    /sbin/chkconfig xinetd on
  3. We need to do some configuration for the rsync daemon as to permissions and directories to serve:

    [root@trap64 etc]# cd /etc
    [root@trap64 etc]# cat rsyncd.conf
    # motd file = /etc/rsyncd.motd
    log file = /var/log/rsyncd.log
    pid file = /var/run/rsyncd.pid
    lock file = /var/run/rsync.lock

    [trap64]
    path = /var/ftp/pub/local
    comment = x86_64 fruit
    uid = nobody
    gid = nobody
    read only = yes
    list = yes
    # auth users = username
    # secrets file = /etc/rsyncd.scrt
    hosts allow = 10.0.0.0/24 127.0.0.0/24
    hosts deny = 0.0.0.0/0

    [root@trap64 etc]#
  4. Set up the iptables -- I do not recall the rsync daemon port off the top of my head, so I look it up:

    [root@trap64 etc]# grep rsync /etc/services | head -2
    rsync 873/tcp # rsync
    rsync 873/udp # rsync
    [root@trap64 etc]#

    ... so the port is 873

    # localhost can do all ...
    -A RH-Firewall-1-INPUT -i lo -j ACCEPT
    # ...
    # rsync daemon
    -A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp -s 10.0.0.0/8 --dport 873 -j ACCEPT
    # ...
  5. Open the wrappers

    #
    ALL: ALL@127.0.0.1
    #
    # ...
    #
    rsync: ALL@10.0.0.0/255.0.0.0
    #
  6. Restart the wrapper-enforcing daemon (xinetd):

    [root@trap64 sysconfig]# /sbin/service xinetd restart
  7. Test it:

    [root@trap64 sysconfig]# rsync localhost::
    trap64 x86_64 fruit
    [root@trap64 sysconfig]#
  8. To put it into production on a client, we can use something like this:

    #!/bin/sh
    #
    # this file: /root/bin/update-archive.sh
    # Copyright (c) 2010 R P Herrold
    # License: GPLv3+
    #
    # ln -s /root/bin/update-archive.sh /etc/cron.hourly/
    #
    export PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin
    umask 022
    # -- non local content goes into the mirror constellation
    [ ! -e /var/ftp/pub/mirror/pmman/RPMS/x86_64/ ] && \
    mkdir -p /var/ftp/pub/mirror/pmman/RPMS/x86_64/
    #
    # export VERBOSE="-v "
    export QUIET="-q "
    #
    /usr/bin/rsync -a ${VERBOSE} ${QUIET} --exclude=working \
    trap64.darkside.lan::trap64/pmman/RPMS/x86_64/. /var/ftp/pub/mirror/pmman/RPMS/x86_64/.
    chown -R root.root /var/ftp/pub/mirror/pmman/RPMS/x86_64
    #
  9. All done


Earlier in this series:

  Function    Link
  hardening   http://www.pmman.com/usage/hardening/
  lftp        http://orcorc.blogspot.com/2010/08/mirroring-upstream-master-with-lftp-to.html
  RO vsftpd   http://orcorc.blogspot.com/2010/07/checklist-ro-ftp-server-setup.html
  RO NFS      http://orcorc.blogspot.com/2010/08/nfs-aide-to-memory.html

22 September 2010

Change control in operations

This crossed the puppet-users mailing list earlier today:

We have an engineering environment of around 200 CentOS servers, plus a production environment of roughly the same size. Currently, when we roll out a new server, we do a 'yum update' so the new server has the latest packages; however this means that just about every server has a different set of package versions - a system rolled out today will have different versions from one rolled out last month, and that will have different versions from one rolled out last year.

...

Has anybody else been faced with this problem, and if so, how did you resolve it?

Let's consider just the problem of 'package version skew' in operations, and come up with a solution for it. [The questioner is also 'starting' with a couple of deployment targets that vary over time because of a poorly considered 'start image' creation.  An obvious approach there is to have a couple of stable base deployment images, and a set of defined transforms to produce a basic engineering workstation or server, per specification; that part is largely uninteresting here]

  1. Set up a local mirror of the CentOS external mirrors, and call it 'incoming'
  2. Optionally, set up a sub-mirror of 'incoming' called 'vault', mirrored in a fashion that does NOT delete old content no longer present on 'incoming'
  3. Set up a third mirror called 'testing', which 'picks and chooses' selected packages to test, and their dependencies (see the package yum-utils for some tools to confirm that one has 'closure' of those dependencies)
  4. Test on your pre-deployment 'bench' against 'testing' until you have a change-set you wish to deploy throughout the universe of your boxes under management. Obviously, several 'testing' mirrors can be set up for differing classes of machines
  5. FINALLY, have a master distribution mirror called 'rtm' that has a change-set from a 'testing' mirror deployed to it. Remove the stock repository specification files from
            /etc/yum.repos.d/ 
     and deploy local variants to taste that point at 'rtm'. Again, several 'rtm' mirrors can be set up for differing classes of machines

Something like this, to ensure coherency of an enterprise-wide deployment, is usually mandated by a Change Control Board (explicitly or implicitly). Obviously, other aspects of an IT policy document will attend to getting the various mirrors properly recoverable in one's backup strategy. [There, the 'testing' mirrors are often NOT covered, as they are ephemeral as to their usefulness, and recoverable out of 'vault' (top down) or from an 'rtm' (bottom up).]

16 August 2010

Mirroring an upstream master with lftp to make a local yum repository

I've been assembling parts for an automated builder on ia64 (the Intel Itanium architecture). It has been a while since CentOS had this architecture in the active part of the updates rotation, and I've been working on builders again, as much for relaxation as for anything else. The old binary RPMs aged away to the archive system under the vault.centos.org hostname, and I don't want to be repeatedly hitting and loading that link for local purposes.

As such, I set to establishing a local mirror, providing ftp access to the mirrored content for the local ia64 to draw from in populating a build chroot. I deployed a fresh host, gave it some drive space, and hardened it. Then I installed and turned up vsftpd, and installed lftp, which I use for mirroring remote FTP or WWW sites. It is more lightweight than rsync, and as this is anonymous mirroring, there is no security issue.

Setting up the control file to drive the mirror could not be easier. Note: we control permitted shell-level access via wrappers and iptables, and use 'keyed' ssh access, with a measure of ~/.ssh/config, ssh-agent, ssh-add, and well-named config files to manage complexity; reaching out to a host away in a data center thus feels the same as reaching one inside the local trusted network. This is not a transitive trust, of course, and remote machines cannot reach in.

[herrold@centos-5 localcopy]$ scp root@198.178.231.209:/root/*conf .
... snip ...
[herrold@centos-5 localcopy]$ cat c55-ia64.conf
#
mirror -c -e \
http://vault.centos.org/4.4/os/ia64/ \
/var/ftp/pub/mirror/centos/4/4.4/os/ia64
#
[herrold@centos-5 localcopy]$

and running it is just as easy:

[root@ia64-c44-mirror ~]# lftp -f c55-ia64.conf

I took a look to make sure it got a full measure of content:

[root@ia64-c44-mirror ~]# du /var/ftp/pub/mirror/centos/4/4.4/os/ia64
10076 /var/ftp/pub/mirror/centos/4/4.4/os/ia64/images/pxeboot
59680 /var/ftp/pub/mirror/centos/4/4.4/os/ia64/images
2048812 /var/ftp/pub/mirror/centos/4/4.4/os/ia64/CentOS/RPMS
165296 /var/ftp/pub/mirror/centos/4/4.4/os/ia64/CentOS/base
2214116 /var/ftp/pub/mirror/centos/4/4.4/os/ia64/CentOS
7952 /var/ftp/pub/mirror/centos/4/4.4/os/ia64/repodata
25080 /var/ftp/pub/mirror/centos/4/4.4/os/ia64/headers
2307452 /var/ftp/pub/mirror/centos/4/4.4/os/ia64
[root@ia64-c44-mirror ~]#

... and then configured a custom set of yum repository files to point to that archive:

[root@ia64-builder hardening]# cd /etc/yum.repos.d/
[root@ia64-builder yum.repos.d]# grep -v ^# pmman-CentOS-Base.repo | grep -v ^$
[base]
name=CentOS-$releasever - Base
baseurl=ftp://ia64-c44-mirror.pmman.net/pub/mirror/centos/4/4.4/os/$basearch/
gpgcheck=1
gpgkey=http://mirror.centos.org/centos/RPM-GPG-KEY-centos4
priority=1
protect=1
[root@ia64-builder yum.repos.d]#

The proof of the pudding is in the eating, of course, so on that ia64, let's remove and then reinstall a leaf-node package (here, the joe text editor):

[root@ia64-builder ~]# yum -d 0 -e 0 clean all
[root@ia64-builder ~]# rpm -e joe
[root@ia64-builder ~]# rpm -q joe
package joe is not installed
[root@ia64-builder ~]# yum -d 0 -e 0 -y install joe
[root@ia64-builder ~]# rpm -q joe
joe-3.1-7.rhel4
[root@ia64-builder ~]#

Seems fine

13 August 2010

stirring the pot with the same old spoon -- rpm options

The same old question came up in an IRC channel yet again today, from a person who will not read, nor Google. Once again, with feeling, here is a quick script to study, for stripping out excess content on a Red Hat, CentOS, or Fedora derived 'multi-arch' box:

[root@centos-5 bin]# cat ./strip-non-x86_64.sh
#!/bin/sh
#
# strip-non-x86_64.sh
#
# strip all non-x86_64 content (and also leave behind
# noarch stuff) for a more build-dependency friendly
# environment
#
# Copyright (c) 2010 R P Herrold
# GPLv3+
# tossed together for education purposes for a blog post
#
# optionally run over and over, after enabling the removal;
# or one pass, if you uncomment two places
#
# not designed to be the most efficient one-pass solution,
# which is possible
#
# NODEPS="--nodeps "
#
# see: man rpm, see '--qf' or: queryformat
# or --querytags
#
for i in `rpm -qa --qf '%{name}.%{arch}\n' | sort | grep -v ^kernel | \
grep -v noarch$ | grep -v "86_64" | grep -v ^gpg-pubkey `; do
echo "${i}"
# rpm -e ${i} ${NODEPS}
done
#
[root@centos-5 bin]# ./strip-non-x86_64.sh
[root@centos-5 bin]#

I am so bone-tired of people who want to be fed from the same old dirty spoon.

For extra credit, see man yum.conf and add the needed multilib_policy = best line to /etc/yum.conf.
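A hedged sketch of that extra-credit step, done idempotently; a scratch file stands in for /etc/yum.conf so the example is harmless to run:

```shell
# Scratch stand-in for /etc/yum.conf, to keep the sketch harmless;
# on a real box the target would be /etc/yum.conf itself.
printf '[main]\ngpgcheck=1\n' > yum.conf.demo

# append the line only if it is not already present
grep -q '^multilib_policy' yum.conf.demo || \
    echo 'multilib_policy = best' >> yum.conf.demo

cat yum.conf.demo
```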

19 June 2010

Reading the logs, part 3 -- Run your updates

It looks like I'll be writing these for a while as I clean up logfile noise. The earlier pieces are here and here. I say 'noise' because these entries are not false positives, but neither are they material; they are just more of a nuisance.


One of the things every admin who reads log files sees is automated scanners looking for exploits in 'canned' packages that were installed but never updated, whether because the admin of a given machine has neglected to run updates, because the exploit is not publicly known, or because the upstream has not yet addressed the matter.

A pattern that has emerged with our PMman data center, which has large contiguous swaths of IP space (with hosts scattered in assignment in that relatively compact range, said hosts reporting to me centrally), is as follows. The hostile exploit scanners are not even trying to be subtle any more -- they simply march sequentially through IP ranges, and inventory whether a given weakness is present on every host to which they connect.

Today, I focus on one sample report stanza:

--------------------- httpd Begin ------------------------

Requests with error response codes
  400 Bad Request
    HTTP/1.1: 1 Time(s)
  403 Forbidden
    /index.html: 1 Time(s)
  404 Not Found
    /cms/e107_files/e107.css: 1 Time(s)
    /db/e107_files/e107.css: 1 Time(s)
    /e107/e107_files/e107.css: 1 Time(s)
    /e107_files/e107.css: 1 Time(s)
    /forum/e107_files/e107.css: 1 Time(s)
    /index.php: 1 Time(s)
    /manager/html: 1 Time(s)
    /portal/e107_files/e107.css: 1 Time(s)
    /site/e107_files/e107.css: 1 Time(s)
    /web/e107_files/e107.css: 1 Time(s)

---------------------- httpd End -------------------------

and apache can handle this trivially:

#
# file: noexploit.conf
#
# send scanners off to see the wizard
#
Redirect permanent /cms http://127.0.0.1/
Redirect permanent /db http://127.0.0.1/
Redirect permanent /e107 http://127.0.0.1/
Redirect permanent /forum http://127.0.0.1/
Redirect permanent /manager http://127.0.0.1/
Redirect permanent /mysql http://127.0.0.1/
Redirect permanent /phpmyadmin http://127.0.0.1/
Redirect permanent /phpMyAdmin http://127.0.0.1/
Redirect permanent /portal http://127.0.0.1/
Redirect permanent /site http://127.0.0.1/
Redirect permanent /user http://127.0.0.1/
Redirect permanent /users http://127.0.0.1/
Redirect permanent /web http://127.0.0.1/
#
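Dropping such a fragment into place is a one-liner on a stock CentOS httpd layout (the conf.d path is the usual convention; verify it for your build):

```shell
# drop the fragment where httpd picks up per-site config,
# then reload without a full restart
install -m 0644 noexploit.conf /etc/httpd/conf.d/noexploit.conf
/sbin/service httpd reload
```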

The obvious next step is to package such deployment hardenings and add them to a local RPM repository, so that simply running updates with yum gets the current best hardening approaches, en masse, onto all the servers.

09 March 2009

aid to memory -- 5.3 QA archive .repo file

I mentioned needing to add the local install archive on a test candidate, in directory /etc/yum.repos.d/, so that updates happen properly:


[herrold@centos-5 yum.repos.d]$ cat CentOS-RPH.repo
[53]
name=CentOS-$releasever - 53
baseurl=ftp://xps400/pub/mirror/centos/centos-qa/centos/5.3/os/i386
gpgcheck=1
gpgkey=http://mirror.centos.org/centos/RPM-GPG-KEY-CentOS-5
[herrold@centos-5 yum.repos.d]$


Without it, as noted in the twitter feed, we get some security updates from 5.2 [avahi, nss, avahi-compat-libdns_sd and nss-tools], and an unsolvable update set.

28 August 2008

... letters, we get letters


To : (elided)
Cc : CentOS security role account
Attchmnt:
Subject : vulnerability cache poisoning in bind-9.3.4-6.0.2.P1.el5_2
----- Message Text -----
On Thu, 28 Aug 2008, (elided) wrote:

> I haven't found any update to the bind software in the
> repositories. Is it necessary to download the source of bind
> version 9.5.x and compile it?
> S.O CentOS 5.2

No; CentOS uses the RPM packaging management system, and 'yum' (which itself uses the 'rpm' programs). This issue has been addressed already for people running updates regularly.

You do not mention the CVE you are concerned about. This is
the process to see the most recent updates as to CVE's.

The RPM package manager permits you to view what has been
addressed in recent time thus:

~]$ rpm -q --changelog bind | \
  grep -i cve | tac | tail
- added upstream patch for correct SIG handling - CVE-2006-4095
- added fix for #225229 - CVE-2007-0494 BIND dnssec denial of service
- added fix for #224445 - CVE-2007-0493 BIND might crash after
- fixed cryptographically weak query id generator (CVE-2007-2926)
- CVE-2007-6283 (#419421)
- CVE-2008-0122 (small buffer overflow in inet_network)
- CVE-2008-1447
~]$


and then viewing:

http://cve.mitre.org/cve/cve.html


Obviously, I used some command line tools to winnow down the
mass of Changelog; one could feed it to '| less' as well.

Placing 2008-1447 into the Search Master Copy of CVE, we
see:

CVE-ID
CVE-2008-1447
(under review)

Learn more at National Vulnerability Database (NVD)
• Severity Rating • Fix Information • Vulnerable Software
Versions • SCAP Mappings

Description
The DNS protocol, as implemented in (1) BIND 8 and 9 before
9.5.0-P1, 9.4.2-P1, and 9.3.5-P1; (2) Microsoft DNS in Windows
2000 SP4, XP SP2 and SP3, and Server 2003 SP1 and SP2; and
other implementations allow remote attackers to spoof DNS
traffic via a birthday attack that uses in-bailiwick referrals
to conduct cache poisoning against recursive resolvers,
related to insufficient randomness of DNS transaction IDs and
source ports, aka "DNS Insufficient Socket Entropy
Vulnerability" or "the Kaminsky bug."

------------------------------------

which is the recent Kaminsky bug. As mentioned, we see it
was addressed by CentOS in:

~]$ rpm -q bind
bind-9.3.4-6.0.2.P1.el5_2
~]$


Thanks for asking.