
last modified by Imogen Flood-Murphy on 01/05/12 - 07:45

Issue

A Red Hat Enterprise Linux server that is not connected to the Internet needs to be updated, and it has no access to an RHN Satellite or Proxy server.

Resolution

The server to be updated is offline and has no connection to the Internet.

In addition, a station (a laptop or virtual machine will do) is needed that runs the same major Red Hat Enterprise Linux version as the server and is connected to Red Hat Network, a Proxy, or a Satellite.

  • Copy the /var/lib/rpm directory to the station connected to the Internet (you can use USB/CD…)

    scp -r /var/lib/rpm root@station:/tmp/
    
  • Install the download only plugin for yum and createrepo on the machine which is connected to the Internet (Red Hat Network):

    yum install yum-downloadonly createrepo
    yum clean all
    
  • Backup the original rpm directory on the station and replace it with the rpm directory from the "offline" server:

    mv -v /var/lib/rpm /var/lib/rpm.orig
    mv -v /tmp/rpm /var/lib/
    
  • Download the updates to /tmp/rpm_updates and then restore the original /var/lib/rpm

    mkdir -v /tmp/rpm_updates
    yum update --downloadonly --downloaddir /tmp/rpm_updates
    createrepo /tmp/rpm_updates
    rm -rvf /var/lib/rpm
    mv -v /var/lib/rpm.orig /var/lib/rpm
    
  • Transfer the downloaded rpms to the server and update:

    scp -r /tmp/rpm_updates root@server:/tmp/
    ssh root@server
    
    cat > /etc/yum.repos.d/rhel-offline-updates.repo << \EOF
    [rhel-offline-updates]
    name=Red Hat Enterprise Linux $releasever - $basearch - Offline Updates Repository
    baseurl=file:///tmp/rpm_updates
    enabled=1
    EOF
    
    yum upgrade
    

…and the server is updated.

These updates are the same ones that "yum update" would have installed had the server itself been connected to the Internet.
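As an optional sanity check (a sketch only, assuming the temporary rhel-offline-updates repository file created above is still in place), the offline server can be asked whether anything from the downloaded set is still pending:

    yum --disablerepo='*' --enablerepo=rhel-offline-updates check-update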

Trackback
Reply

last modified by Shane Bradley on 01/19/12 - 11:30

Issue
  • How do you configure an ILO 3 fence device for RHEL Clustering?
Environment
  • Red Hat Cluster Suite 4+
  • Red Hat Enterprise Linux 5 Advanced Platform (Clustering)
  • Red Hat Enterprise Linux Server 6 (with the High Availability Add on)
Resolution

Support for the iLO3 fence device has been added to the fence_ipmilan fence device in the following errata: http://rhn.redhat.com/errata/RHEA-2010-0876.html.

The iLO3 firmware should be a minimum of 1.15 as provided by HP.

On all cluster nodes, install the following OpenIPMI packages used for fencing:

$ yum install OpenIPMI OpenIPMI-tools

Stop and disable the 'acpid' daemon:

$ service acpid stop; chkconfig acpid off

Test ipmitool interaction with iLO3:

$ ipmitool -H <iloip> -I lanplus -U <ilousername> -P <ilopassword> chassis power status

The desired output is:

Chassis Power is on

Edit the /etc/cluster/cluster.conf to add the fence device:

<?xml version="1.0"?>
<cluster alias="rh5nodesThree" config_version="32" name="rh5nodesThree">
  <fence_daemon clean_start="0" post_fail_delay="1" post_join_delay="3"/>
  <clusternodes>
    <clusternode name="rh5node1.examplerh.com" nodeid="1" votes="1">
      <fence>
        <method name="1">
          <device domain="rh5node1" name="ilo3_node1"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="rh5node2.examplerh.com" nodeid="2" votes="1">
      <fence>
        <method name="1">
          <device domain="rh5node2" name="ilo3_node2"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="rh5node3.examplerh.com" nodeid="3" votes="1">
      <fence>
        <method name="1">
          <device domain="rh5node3" name="ilo3_node3"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
  <cman expected_votes="3">
    <multicast addr="229.5.1.1"/>
  </cman>
  <fencedevices>
    <fencedevice agent="fence_ipmilan" power_wait="10" ipaddr="XX.XX.XX.XX" lanplus="1" login="username" name="ilo3_node1" passwd="password"/>
    <fencedevice agent="fence_ipmilan" power_wait="10" ipaddr="XX.XX.XX.XX" lanplus="1" login="username" name="ilo3_node2" passwd="password"/>
    <fencedevice agent="fence_ipmilan" power_wait="10" ipaddr="XX.XX.XX.XX" lanplus="1" login="username" name="ilo3_node3" passwd="password"/>
  </fencedevices>
  <rm>
    <failoverdomains/>
    <resources/>
  </rm>
</cluster>

Test that fencing is successful. From node1, attempt to fence node2 as follows:

$ fence_node node2

For more information on fencing cluster nodes manually, see the following article: How do you manually call fencing agents from the command line?
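As a quick illustration, the agent can also be called directly to query power status over IPMI lanplus. This is only a sketch using the standard fence_ipmilan options, with the same address and credentials as in cluster.conf:

$ fence_ipmilan -a XX.XX.XX.XX -l username -p password -P -o status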

Trackback
Reply

last modified by Takayoshi Kimura on 02/14/12 - 02:36

Issue

The 2.6.11 Linux kernel introduced changes to the lpfc (Emulex) and qla2xxx (QLogic) Fibre Channel Host Bus Adapter (HBA) drivers which removed the following entries from the proc pseudo-filesystem: /proc/scsi/qla2xxx and /proc/scsi/lpfc. These entries had provided a centralized repository of information about the drivers and connected hardware. After the changes, the drivers store all of this information within the /sys filesystem. Since Red Hat Enterprise Linux 5 uses version 2.6.18 of the Linux kernel, it is affected by this change.

Using the /sys filesystem has the advantage that all the Fibre Channel drivers now use a unified and consistent manner to report data. However it also means that the data previously available in a single file is now scattered across a myriad of files in different parts of the /sys filesystem.

One basic example is the status of a Fibre Channel HBA: checking this can now be accomplished with the following command:

# cat /sys/class/scsi_host/host#/state

where host# is the H-value in the HBTL SCSI addressing format, which references the appropriate Fibre Channel HBA. For Emulex adapters (lpfc driver), for example, this command would yield:

# cat /sys/class/scsi_host/host1/state
Link Up - Ready:
Fabric

For QLogic devices (qla2xxx driver) the output would instead be as follows:

# cat /sys/class/scsi_host/host1/state
Link Up - F_Port
Environment

Red Hat Enterprise Linux 5

Resolution

Obviously it becomes quite impractical to search through the /sys filesystem for the relevant files when there is a large variety of Fibre Channel-related information of interest. Instead of searching manually, the systool(1) command provides a simple but powerful means of examining and analyzing this information. The commands below demonstrate samples of the information that systool can be used to examine.

To examine some simple information about the Fibre Channel HBAs in a machine:

# systool -c fc_host -v

To look at verbose information regarding the SCSI adapters present on a system:

# systool -c scsi_host -v

To see what Fibre Channel devices are connected to the Fibre Channel HBA cards:

# systool -c fc_remote_ports -v -d

For Fibre Channel transport information:

# systool -c fc_transport -v

For information on SCSI disks connected to a system:

# systool -c scsi_disk -v

To examine more disk information including which hosts are connected to which disks:

# systool -b scsi -v

Furthermore, by installing the sg3_utils package it is possible to use the sg_map command to view more information about the SCSI map. After installing the package, run:

# modprobe sg

# sg_map -x

Finally, to obtain driver information, including version numbers and active parameters, the following commands can be used for the lpfc and qla2xxx drivers respectively:

# systool -m lpfc -v

# systool -m qla2xxx -v

ATTENTION: The syntax of the systool (1) command differs across versions of Red Hat Enterprise Linux. Therefore the commands above are only valid for Red Hat Enterprise Linux 5.
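As a small convenience, the per-host state attribute shown earlier can also be read for every host in one pass; this is just a sketch built on the /sys paths above:

# for h in /sys/class/scsi_host/host*/state; do echo "$h: $(cat $h)"; done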

Trackback
Reply

last modified by Ray Dassen on 08/13/11 - 04:57

Issue

What is the SysRq facility and how do I use it?

Environment
  • Red Hat Enterprise Linux 3, 4, 5, and 6
Resolution
What is the "Magic" SysRq key?

According to the Linux kernel documentation:

It is a 'magical' key combo you can hit which the kernel will respond to regardless of whatever else it is doing, unless it is completely locked up.

The sysrq key is one of the best (and sometimes the only) way to determine what a machine is really doing. It is useful when a system appears to be "hung" or for diagnosing elusive, transient, kernel-related problems.

How do I enable and disable the SysRq key?

For security reasons, Red Hat Enterprise Linux disables the SysRq key by default. To enable it, run:

# echo 1 > /proc/sys/kernel/sysrq

To disable it:

# echo 0 > /proc/sys/kernel/sysrq

To enable it permanently, set the kernel.sysrq value in /etc/sysctl.conf to 1. That will cause it to be enabled on reboot.

# grep sysrq /etc/sysctl.conf
kernel.sysrq = 1
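To apply the new value immediately without a reboot, it can also be written through the sysctl command, for example:

# sysctl -w kernel.sysrq=1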

Since enabling sysrq gives someone with physical console access extra abilities, it is recommended to disable it when not troubleshooting a problem or to ensure that physical console access is properly secured.

How do I trigger a sysrq event?

There are several ways to trigger a sysrq event. On a normal system, with an AT keyboard, sysrq events can be triggered from the console with the following key combination:

Alt+PrintScreen+[CommandKey]

For instance, to tell the kernel to dump memory info (command key "m"), you would hold down the Alt and Print Screen keys, and then hit the m key.

Note that this will not work from an X Window System screen. You should first change to a text virtual terminal. Hit Ctrl+Alt+F1 to switch to the first virtual console prior to hitting the sysrq key combination.

On a serial console, you can achieve the same effect by sending a Break signal to the console and then hitting the command key within 5 seconds. This also works for virtual serial console access through an out-of-band service processor or remote console like HP iLO, Sun ILOM and IBM RSA. Refer to the service processor specific documentation for details on how to send a Break signal; for example, How to trigger SysRq over an HP iLO Virtual Serial Port (VSP).

If you have a root shell on the machine (and the system is responding enough for you to do so), you can also write the command key character to the /proc/sysrq-trigger file. This is useful for triggering this information when you are not on the system console or for triggering it from scripts.

# echo 'm' > /proc/sysrq-trigger
When I trigger a sysrq event that generates output, where does it go?

When a sysrq command is triggered, the kernel will print out the information to the kernel ring buffer and to the system console. This information is normally logged via syslog to /var/log/messages.

Unfortunately, when dealing with machines that are extremely unresponsive, syslogd is often unable to log these events. In these situations, provisioning a serial console is often recommended for collecting the data.
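For example (a sketch on a still-responsive system), the memory report can be triggered from a shell and then read back from the ring buffer:

# echo 'm' > /proc/sysrq-trigger
# dmesg | tail -30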

What sort of sysrq events can be triggered?

There are several sysrq events that can be triggered once the sysrq facility is enabled. These vary somewhat between kernel versions, but there are a few that are commonly used:

  • m - dump information about memory allocation

  • t - dump thread state information

  • p - dump current CPU registers and flags

  • c - intentionally crash the system (useful for forcing a disk or netdump)

  • s - immediately sync all mounted filesystems

  • u - immediately remount all filesystems read-only

  • b - immediately reboot the machine

  • o - immediately power off the machine (if configured and supported)

  • f - start the Out Of Memory Killer (OOM)

  • w - dump tasks that are in an uninterruptible (blocked) state
Trackback
Reply

last modified by Andrius Benokraitis on 10/04/11 - 12:21

NOTE: The following information has been provided by Red Hat, but is outside the scope of our posted Service Level Agreements (https://www.redhat.com/support/service/sla/ ) and support procedures. The information is provided as-is and any configuration settings or installed applications made from the information in this article could make your Operating System unsupported by Red Hat Support Services. The intent of this article is to provide you with information to accomplish your system needs. Use the information in this article at your own risk.

Issue

  • Red Hat Network (RHN) does not contain Red Hat Enterprise Linux 4.9 installation ISOs.[1]

Environment

  • Red Hat Enterprise Linux 4.8 without access to Red Hat Network

Resolution

  • Create a Reference System that connects to Red Hat Network and downloads the latest RHEL 4 packages. Those downloaded packages are then used to upgrade the Target System from Red Hat Enterprise Linux 4.8 to Red Hat Enterprise Linux 4.9 without connecting to Red Hat Network.
  • Reference System: Red Hat Enterprise Linux 4.8 installed and connected to Red Hat Network

  • Target System: Red Hat Enterprise Linux 4.8 installed but not connected to Red Hat Network

  • It is assumed that the Reference System is identical or similar to the Target System, including architecture type. If they cannot be similar, it is recommended that the Reference System be an @everything installation to minimize missed package updates.

Reference System Setup
  • Issue the following commands as root user on the Reference System after installing a base Red Hat Enterprise Linux 4.8 system from Red Hat Network.
  • Ensure there are no previously downloaded RPMs on the system:

rm -rf /var/spool/up2date/*

  • Download all available updates (including those on the "skip" list) from RHN and store them in /var/spool/up2date :

up2date -u -v -d -f

  • Transfer the downloaded packages to an empty mounted device for later use on the Target System:

cp /var/spool/up2date/*.rpm /media/flash_drive

Target System Setup
  • Perform the following actions as root user on the Target System after completing the previous steps with the Reference System.
  • Mount the device containing the updated packages.

  • Edit the /etc/sysconfig/rhn/sources file with the following:

...
#up2date default
dir rhel49 /media/flash_drive
...

Commenting out the "up2date default" line disables the default package source, and the "dir" line replaces it with the locally mounted device.

  • Import the RPM GPG key:

rpm --import /usr/share/rhn/RPM-GPG-KEY

  • Update all packages (including the kernel) on the Target System:

up2date -uf

  • Reboot the system.

[1]  The Red Hat Enterprise Linux 4 Life Cycle entered Production 3 Phase on 16-Feb-2011 with the release of Red Hat Enterprise Linux 4.9. No new features, hardware support or updated installation images (ISOs) are released during the Production 3 phase. Refer to the Red Hat Enterprise Linux Support Policy for details on the life cycle of Red Hat Enterprise Linux releases.

Trackback
Reply

1. What is PXE boot?

The Pre-boot eXecution Environment (PXE) is an environment that allows a computer to be booted over its network interface.

 

2. PXE components

Most servers these days ship with a PXE-capable network card, so PXE is handy when a machine has no DVD-ROM drive or does not recognize a bootable USB stick.

  • PXE Server - exchanges configuration information, including the boot image file.

  • TFTP Server - transfers the boot image file.

  • PXE Client - requires a PXE-capable network card (most cards released after 2000 have one).

 

3. Setting up TFTP

Download: http://tftpd32.jounin.net/

[Screenshot: default Tftpd32 screen]

[Screenshot: GLOBAL settings]

At a minimum, both the TFTP Server and the DHCP Server services must be enabled.

Keep in mind that the TFTP server only transfers the boot image; it is not a real FTP server.

The DHCP server hands out the IP address that a client needs in order to PXE boot.

 

 



[Screenshot: TFTP settings]

The TFTP settings can be left at their defaults.

TFTP uses UDP port 69 by default.

If the server carries addresses in multiple IP ranges, select the range to use under the "Bind TFTP to this address" option. Doing so also makes IP assignment a little faster.



[Screenshot: DHCP settings]

The DHCP settings are the most important part.

They are not very different from a Linux DHCP configuration, but the key item here is the pxelinux.0 boot file setting.

This file is included in the syslinux package of a Linux installation, so download the appropriate version ahead of time.

* CAUTION

The pxelinux.0 and menu.c32 files from RHEL 5 cannot PXE boot RHEL 6! Be sure to have the newer RHEL 6 versions of pxelinux.0 and menu.c32 on hand.

 

 

 

4. Directory layout

Unpacking TFTP leaves only a handful of files. It cannot be used in that state; the files and directories below have to be created underneath it.

Required directory

pxelinux.cfg : a directory that plays the same role as a syslinux.cfg file; it must contain a file named default. (Its format is 100% identical to syslinux.cfg, so a well-tested syslinux.cfg can simply be renamed and reused; see the sample default file below.)

Required files

pxelinux.0 : the boot loader

menu.c32 : displays the menu during installation

Other files and directories

ks : holds kickstart files

vesamenu.c32 : provides a graphical menu environment

rhel5.X : the extracted contents of a RHEL 5.X ISO

rhel6.X : the extracted contents of a RHEL 6.X ISO
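A minimal pxelinux.cfg/default is sketched below. The label names, directory names and the kickstart location are placeholders that follow the layout above and must be adapted to the actual environment:

default menu.c32
prompt 0
timeout 300

label rhel6
    menu label Install RHEL 6.X
    kernel rhel6.X/images/pxeboot/vmlinuz
    append initrd=rhel6.X/images/pxeboot/initrd.img ks=nfs:<server>:/ks/rhel6.cfg

label rhel5
    menu label Install RHEL 5.X
    kernel rhel5.X/images/pxeboot/vmlinuz
    append initrd=rhel5.X/images/pxeboot/initrd.img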

 

5. Reference URLs

http://tftpd32.jounin.net/tftpd32_download.html

http://www.syslinux.org/wiki/index.php/PXELINUX

Trackback
Reply
Now and then, when going out to install a server, you run into one that has no DVD-ROM drive.
Even when that is not the case, an on-site incident sometimes ends with the OS having to be reinstalled. In these situations a bootable installation USB stick can easily be made from an ISO file.

Previous post: 2012/02/21 - [My Advanced Linux/Advanced Linux] - How do I create a bootable USB pen drive to start a Red Hat Enterprise Linux installation?

There is also an open-source tool that makes this just as easy on Windows:

http://iso2usb.sourceforge.net/ 

Several tools can turn an ISO into a bootable USB stick, but for RHEL and CentOS this one is recommended.
(It is based on UNetbootin, so the interface is practically identical.)

 
 

1. Under Diskimage, select the ISO image of the 5.x/6.x release to install.

2. Under Type, select USB Drive and choose the target drive.

3. Click OK.


It finishes faster than you might expect.
No extra work is needed: plug the USB stick in and the installation looks exactly the same as installing from the DVD.


A couple of extra tips: by editing syslinux.cfg, images for multiple installation versions can be placed on the same stick; a small sketch follows below.
How to edit syslinux.cfg is easy to find with a quick search, so it is not covered in detail here. Combine this with a kickstart file and the OS can be installed with a single click after plugging in the USB stick.
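A rough sketch of such a syslinux.cfg (the label names, kernel/initrd file names and the NFS kickstart path are placeholders, not tested values):

default menu.c32
prompt 0
timeout 100

label rhel6-ks
    menu label Install RHEL 6.x (kickstart)
    kernel vmlinuz6
    append initrd=initrd6.img ks=nfs:<server>:/ks/rhel6.cfg

label rhel5
    menu label Install RHEL 5.x
    kernel vmlinuz5
    append initrd=initrd5.img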


Trackback
Reply

last modified by Raghu Udiyar on 12/09/11 - 15:24

Release found: Red Hat Enterprise Linux 5

Problem

You need to install Red Hat Enterprise Linux on a server which does not have a floppy drive or CD-ROM drive, but which does have a USB port.

Assumptions

  • Your network environment is not set up to allow Red Hat Enterprise Linux to be installed completely from the network (through PXE boot). If it is, please make use of this option, as it is more straightforward than the procedure documented here.
  • Your network environment is configured to provide the contents of the Red Hat Enterprise Linux DVDs through a protocol supported by the Red Hat Enterprise Linux installer, such as NFS or FTP.
  • The server's BIOS supports booting from a USB mass storage device like a flash/pen drive.

Solution

The following steps configure a USB pen drive as a boot medium to start the installation of Red Hat Enterprise Linux.

  1. Attach the USB pen drive to a system which is already running Red Hat Enterprise Linux.
  2. Run

    dmesg

  3. From the dmesg output, identify the device name under which the drive is known to the system.

    Sample messages for a 1 GB flash disk being recognized as /dev/sdb:

    Initializing USB Mass Storage driver...
    scsi2 : SCSI emulation for USB Mass Storage devices
    usb-storage: device found at 5
    usb-storage: waiting for device to settle before scanning
    usbcore: registered new driver usb-storage
    USB Mass Storage support registered.
      Vendor: USB 2.0   Model: Flash Disk        Rev: 5.00
      Type:   Direct-Access                      ANSI SCSI revision: 02
    SCSI device sdb: 2043904 512-byte hdwr sectors (1046 MB)
    sdb: Write Protect is off
    sdb: Mode Sense: 0b 00 00 08
    sdb: assuming drive cache: write through
    SCSI device sdb: 2043904 512-byte hdwr sectors (1046 MB)
    sdb: Write Protect is off
    sdb: Mode Sense: 0b 00 00 08
    sdb: assuming drive cache: write through
    sdb: sdb1
    sd 2:0:0:0: Attached scsi removable disk sdb
    sd 2:0:0:0: Attached scsi generic sg1 type 0

    usb-storage: device scan complete

  4. Note: For the remainder of this article, we will assume this device name to be /dev/sdb. Make sure you adjust the device references in the following steps as per your local situation.

  5. At this point, the flash drive is likely to have been automatically mounted by the system. Make sure the flash drive is unmounted. E.g. in nautilus, by right-clicking on the icon for the drive and selecting Unmount Volume.
  6. Use fdisk to partition the flash drive as follows:
    • There is a  single partition.
    • This partition is numbered as 1.
    • Its partition type is set to 'b' (W95 FAT32).
    • It is tagged as bootable.
  7. Format the partition created in the previous step as FAT:

    mkdosfs /dev/sdb1

  8. Mount the partition:

    mount /dev/sdb1 /mnt

  9. Copy the contents of /RedHat/isolinux/ from the first installation CD/DVD onto the flash drive, i.e. to /mnt.

    Note: the files isolinux.bin, boot.cat and TRANS.TBL are not needed and can thus be removed or deleted.

  10. Rename the configuration file:

    cd /mnt/; mv isolinux.cfg syslinux.cfg

  11. Copy the installer's initial RAM disk /RedHat/images/pxeboot/initrd.img from the first installation CD/DVD onto the flash drive, i.e. to /mnt.

  12. Optional step: To configure any boot settings, edit the syslinux.cfg on the USB flash drive. For example to configure the installation to use a kickstart file shared over NFS, specify the following:

    linux ks=nfs:<server>:/<path>/ks.cfg

  13. Unmount the flash drive:

    umount /dev/sdb1

  14. Make the USB flash drive bootable. The flash drive must be unmounted for this to work properly.

    syslinux /dev/sdb1

  15. Mount the flash drive again:

    mount /dev/sdb1 /mnt

  16. Install GRUB on the USB flash drive:

    grub-install --root-directory=/mnt /dev/sdb

  17. Verify that the USB flash drive has a /boot/grub directory. If it does not, create the directory manually.

    cd /mnt

    mkdir -p boot/grub

  18. Create the grub.conf file. Below is a sample grub.conf:

    default=0
    timeout=5
    root (hd1,0)
    title Red Hat Enterprise Linux installer
        kernel /vmlinuz
        initrd /initrd.img

  19. Copy the grub.conf file created above into the /boot/grub/ directory of the USB flash drive, or confirm that it is already there.

  20. Unmount the flash drive:

    umount /dev/sdb1

  21. At this point, the USB disk should be bootable.

  22. Attach the USB disk to the system you wish to install Red Hat Enterprise Linux on.
  23. Boot from the USB disk. Refer to the hardware vendor's BIOS documentation for details on changing the order in which devices are tried at boot time.
  24. Once you are booted in the Red Hat Enterprise Linux installer, continue with your network installation of choice.
Trackback
Reply

A script, barely worth calling a shell script, that I put together because cleaning up daemons after a Linux install was too tedious to do by hand. -_-;;

#!/bin/bash
# created by uzoogom at 2012.2.13

export LC_ALL=C

RED='\e[1;31m'
GREEN='\e[1;32m'
YELLOW='\e[1;33m'
BLUE='\e[1;34m'
NC='\e[0m'

# All services registered with chkconfig
chkall=$(chkconfig --list | egrep "(on|off)" | awk '{print $1}')
# Services to keep enabled, read from chklist.txt (lines starting with # are ignored)
chkon=$(cat chklist.txt | egrep -v "^#")

# STOP ALL DAEMONS
echo -e "ALL DAEMONS STOP ========================================================"
for chkalloff in $chkall
do
        chkconfig --level 2345 $chkalloff off
        service $chkalloff stop 1>/dev/null
        echo -e "$chkalloff is ${RED}OFF${NC}"
done
echo "Done====================================================================="

echo ""

# START ONLY THE SELECTED DAEMONS
echo -e "SELECTED DAEMONS START =================================================="
for chkselect in $chkon
do
        chkconfig --level 2345 $chkselect on
        service $chkselect start 1>/dev/null
        echo -e "$chkselect is ${GREEN}ON${NC}"
done
echo "Done====================================================================="

echo ""

# Disable SELinux now and in the persistent config
echo "selinux status==========================================================="
setenforce 0 1>/dev/null
perl -pi -e 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
grep "SELINUX=" /etc/selinux/config | egrep -v "^#"
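For reference, chklist.txt simply lists the services to leave enabled, one per line. A hypothetical example:

# services to keep enabled
network
sshd
crond
syslog
ntpd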

Trackback
Reply

1. What is the ntpd daemon?

ntpd is a daemon that keeps the system clock adjusted by referencing NTP servers, and that in turn can serve time to clients.

 

2. Configuring ntpd

1) /etc/ntp.conf

# The restrict lines limit how peers may sync against this server
restrict 127.0.0.1
restrict -6 ::1

# NTP servers to reference
server <ntp server 1>
server <ntp server 2>

# driftfile: stores the measured clock drift; created automatically by the ntpd daemon
driftfile /var/lib/ntp/drift

# File that stores the keys used for authentication
keys /etc/ntp/keys
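After editing the file, restart ntpd and enable it at boot; a quick sketch using RHEL-style service management:

service ntpd restart
chkconfig ntpd on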

 

3. Checking ntpd

Whether ntpd is operating normally can be checked with a few basic commands.

1) ntpq -p

Example:

     remote           refid      st t  when poll reach   delay   offset  jitter
================================================================================
+172.31.x.x      .GPS.            1 u    18   64  377    0.397  -525.92 342.898
*172.31.x.x      .GPS.            1 u    57   64  377    0.326  -707.41 186.074






Label     Description
remote    Remote server host
refid     Reference ID (shown as 0.0.0.0 when it cannot be determined)
st        Stratum number; indicates which layer the server is at
t         Type of the peer (l: local, u: unicast, m: multicast, b: broadcast)
when      Time elapsed since the last packet was received (seconds)
poll      Polling interval (seconds)
reach     Reachability register, expressed in octal
delay     Estimated delay for the polling interval (milliseconds)
offset    Offset of the peer (milliseconds)
jitter    Dispersion of the peer (milliseconds)

Mark             Description
' ' (reject)     Not used because it is too far away
'x' (falsetick)  Rejected by the falseticker check
'.' (excess)     Not used because there are too many reference servers
'-' (outlyer)    Rejected by the clustering check
'+' (candidate)  Passed the checks and can be used as a reference at any time
'#' (selected)   Sync distance is large, but still usable as a reference
'*' (sys.peer)   The server currently being synchronized to
'o' (pps.peer)   Currently being synchronized (indirectly, via a PPS signal)

 

 

When synchronization is healthy:

     remote           refid      st t  when poll reach   delay   offset  jitter
================================================================================
+172.31.x.x      .GPS.            1 u    18   64  377    0.397  -525.92 342.898
*172.31.x.x      .GPS.            1 u    57   64  377    0.326  -707.41 186.074





 

When the connection is not healthy - 1:

     remote           refid      st t  when poll reach   delay   offset  jitter
================================================================================
x172.31.x.x      .GPS.            1 u   102  256  377    0.366  -1053.6 306.647
x172.31.x.x      .GPS.            1 u    77  256  377    0.354  -1472.2 269.880





 

When the connection is not healthy - 2:

     remote           refid      st t  when poll reach   delay   offset  jitter
================================================================================
 172.31.x.x      .STEP.          16 u    21   64    0    0.000    0.000   0.001
 172.31.x.x      .STEP.          16 u    25   64    0    0.000    0.000   0.001





 

2) A note on poll

The poll interval differs from server to server depending on the network and other factors. By default minpoll is 64 s (2^6) and maxpoll is 1,024 s (2^10). These options can also be placed in ntp.conf:

minpoll 64    ; cannot be smaller than 16 s (2^4)

maxpoll 1024  ; cannot be larger than 36.4 h (2^17)

4. Other notes

1) clocksource

# cat /sys/devices/system/clocksource/clocksource0/available_clocksource
acpi_pm jiffies tsc pit    → the available clock sources

# cat /sys/devices/system/clocksource/clocksource0/current_clocksource
tsc                        → the clock source currently in use

 

2) Tickless Linux kernels

Kernels after 2.6.18 began moving away from tick counting, and some newer kernels use on-demand rather than strictly periodic timer interrupts; these are called tickless kernels. Tickless kernels use local APIC timer interrupts rather than the PIT. Whether time is being counted at the expected rate can be verified as shown below.

Red Hat Enterprise Linux 4: HZ = 1000

Red Hat Enterprise Linux 5: HZ = 1000

 

# cat /proc/interrupts ; sleep 10; cat /proc/interrupts

           CPU0       CPU1
  0:     125251      79291    IO-APIC-edge  timer
  1:        591        585    IO-APIC-edge  i8042
  8:          0          0    IO-APIC-edge  rtc
  9:          0          0    IO-APIC-level  acpi
 12:         67          8    IO-APIC-edge  i8042
 14:        753        643    IO-APIC-edge  ide0
169:       2840        142    IO-APIC-level  ioc0
177:        748         19    IO-APIC-level  eth0
NMI:         43         35
LOC:     204282     204830
ERR:          0
MIS:          0

           CPU0       CPU1
  0:     134539      80039    IO-APIC-edge  timer
  1:        592        585    IO-APIC-edge  i8042
  8:          0          0    IO-APIC-edge  rtc
  9:          0          0    IO-APIC-level  acpi
 12:         67          8    IO-APIC-edge  i8042
 14:        771        715    IO-APIC-edge  ide0
169:       2840        147    IO-APIC-level  ioc0
177:        800         19    IO-APIC-level  eth0
NMI:         43         36
LOC:     214314     214862
ERR:          0
MIS:          0

 

before = 125251 + 79291 = 204542

after  = 134539 + 80039 = 214578

timer rate = (214578 - 204542) / 10 seconds ≈ 1004/sec, which matches the expected HZ = 1000
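The same check can be scripted; the following is only a sketch and assumes the two-CPU "IO-APIC-edge  timer" line format shown above:

before=$(awk '/timer/ {for (i=2; i<NF-1; i++) s+=$i; print s}' /proc/interrupts)
sleep 10
after=$(awk '/timer/ {for (i=2; i<NF-1; i++) s+=$i; print s}' /proc/interrupts)
echo "timer rate: $(( (after - before) / 10 )) interrupts/sec"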

 

5. Reference URLs

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1005802

 

http://kb.vmware.com/selfservice/microsites/search.do?cmd=displayKC&docType=kc&externalId=1006113&sliceId=1&docTypeID=DT_KB_1_1&dialogID=123674026&stateId=0%200%20132037035

 

http://kldp.org/node/97268

 

 

Trackback
Reply