interesting: Heisenberg effect: the xload and ps commands themselves use up system resources

limit memory; limit cputime to 200 minutes: limit -h cputime 200m

MAXUSERS: 32

configured at a very quiet time, Saturday afternoon, 24 hours after reboot:
453 oas> uptime
  6:19pm  up 1 day, 47 mins,  17 users,  load average: 1.45, 1.41, 1.05
at a busier time, Monday morning:
318 oas> uptime
 11:34am  up 2 days, 19:02,  32 users,  load average: 2.36, 1.12, 0.81

prevent xload windows etc ...
script for accounting daily: p 36
benchmark: filesystem speed: p 50
simulate interactive load: p 51
netstat script which takes samples regularly: p 184
script watching vmstat (pageouts): p 91 (maybe not necessary for sun: vmstat -s 5)
use dirs instead of ps (alias for all users?)
avoid using ps, xload windows etc
do not run the default xbiff window
avoid long search paths (take some entries out until needed?)
smaller directory size (problem: bin ..)

daemons:
  accounting: acctcms
  comsat: "you have new mail"; disabled
  routed: "only default route" ... how reliable is our routing sys?
  sendmail:

all mail services: low priority (high memory); should go to a different
machine: crossmount /usr/spool/mail and /aliases (but mh has to be on the
local machine)
put sendmail in -q 30m to 3h (queue checking for stuck mail)
prevent sendmail from delivering mail when load is high: -ox limit,
e.g. in rc.local: sendmail -bd -q3h -ox5 &

where are the man pages for netscape? are there any?
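The sendmail settings above could be combined into an rc.local fragment roughly like this. A sketch only: the -ox load cutoff of 5 is the guessed value from these notes, not something we have tested.

```shell
# Sketch for rc.local: start sendmail as a daemon (-bd), check the queue
# only every 3 hours (-q3h), and stop delivering when the load average
# reaches 5 (-ox5), queueing mail instead. The cutoff of 5 is a guess.
if [ -f /usr/lib/sendmail ]; then
        /usr/lib/sendmail -bd -q3h -ox5 && echo -n ' sendmail'
fi
```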
nice netscape; nice print jobs; nice sendmail

memory: use shared libs for your most demanding jobs:
411 oas> ldd /usr/local/bin/X11/netscape
        -lc.1 => /usr/lib/libc.so.1.9
        -ldl.1 => /usr/lib/libdl.so.1.0
413 oas> ldd /usr/local/bin/emacs
/usr/local/bin/emacs: statically linked
but:
582 oas> file /usr/local/bin/X11/netscape
/usr/local/bin/X11/netscape: sparc demand paged dynamically linked executable
583 oas> file /usr/local/bin/emacs
/usr/local/bin/emacs: sparc demand paged executable not stripped

programs that used a lot of cpu but that I do not know:
domainna rpc.rsta in.comsa

Oas kernel configuration:
407 oas> pstat -T
  302/1888 files
  644/1018 inodes
   78/522  processes
15112/327596 swap
why is the file table double the size of the inode table (they have almost a
one-to-one relationship)? we probably need more inodes than files: a lot of
files, and it seems not enough inodes. quite a lot of processes.
processes per user (kernel parameter)
number of mounted filesystems possible (sufficient?)

# This kernel supports about 16 users. Count one user for each
# timesharing user, one for each window that you typically use, and one
# for each diskless client you serve. This is only an approximation used
# to control the size of various kernel data structures, not a hard limit.

oas has kernel parameters standard for a workstation but not for a file server:
MAXUSERS 32 (too small)   number of users (an xterm counts as 2 users ..)
MAXUPRC  25 (too many)    number of procs per user (a file server: not many.
                          or too few?)
NMOUNT   40 (too small?)

my book says:
maxusers = 1.5 * [ basic sys (2) + users simultaneously active (15)
                   + NFS clients booting from us (?)
                   + 0.5 * NFS clients we export filesystems to (?) ]
           rounded up
NPROC = 10 + 16 * MAXUSERS (was increased for heavy X work .. needs higher?)
X-window needs at least 2 processes
nfile is probably too small:
#if NWIN > 0  /* if using the window system, will need more open files */
int nfile = 16 * (NPROC + 16 + MAXUSERS) / 5 + 64;
#else
int nfile = 16 * (NPROC + 16 + MAXUSERS) / 10 + 64;  /* (this is what we have!!) */
#endif

# The "open EEPROM" pseudo-device is required to support the
# eeprom command.
pseudo-device   openeepr        # onboard configuration NVRAM
# The following is needed to support the Sun dialbox.
pseudo-device   db              # dialbox support
# The next few are for SunWindows support, needed to run SunView 1.
pseudo-device   win256          # window devices, allow 256 windows
pseudo-device   dtop4           # desktops (screens), allow 4
pseudo-device   ms              # mouse support
# The following option adds support for SunView 1 journaling.
options         WINSVJ          # SunView 1 journaling support
# The following option is only needed if you want to use the trpt
# command to debug TCP problems.
options         TCPDEBUG        # TCP debugging, see trpt(8)
# The following options are for accounting and auditing. SYSAUDIT
# should be removed unless you are using the C2 security features.
options         SYSACCT         # process accounting, see acct(2) & sa(8)
options         SYSAUDIT        # C2 auditing for security

sendmail can be a problem: outsource to taal?
emacs can be a problem

describe uptime (user-specific: "w"):
  7:45pm  up 1 day,  2:13,  17 users,  load average: 1.07, 0.57, 0.50
users (kernel users ... should be used when empty to calculate sys processes)
1.07, 0.57, 0.50: average number of jobs in the run queue over the last
1, 5 and 15 minutes

describe ps:
oas> ps -aux | more
USER       PID %CPU %MEM   SZ  RSS TT STAT START   TIME COMMAND
root         0  0.0  0.0    0    0 ?  D    Oct 27  0:09 swapper
root         1  0.0  0.0   52    0 ?  IW   Oct 27  0:00 /sbin/init -
root         2  0.0  0.0    0    0 ?  D    Oct 27  0:00 pagedaemon
root       154  0.0  0.0  268    0 ?  IW   Oct 27  0:00 -Waiting for connection
bin        124  0.0  0.0   36    0 ?  IW   Oct 27  0:00 ypbind
sergey    1043  0.0  0.1  200   92 p3 S    Oct 27  0:00 xbiff
matt      1903  0.0  1.0 5088 1576 p4 S    20:11   2:12 emacs
root       120  0.0  0.0   56    0 ?  IW   Oct 27  0:00 ypxfrd
root       115  0.0  0.0   68    0 ?
IW   Oct 27  0:02 portmap

PID   process ID
%CPU  cpu percentage
%MEM  memory percentage
SZ    virtual mem size of job
RSS   mem size of job in memory
TT    lists the tty the job runs from
STAT  R  job runnable
      T  currently stopped
      P  waiting for page-in (rare to see since UNIX is too fast .. inquire with vmstat)
      D  waiting for disk (.. inquire with vmstat)
      S  sleeping for < 20s (otherwise idle)
      I  idle (candidate to be swapped out ... when idle)
      Z  zombie (no problem until too many)

cpu usage:
sa -m | more
(columns: # of procs, cpu time, io, memory*cpu integral (storage integral))
christin    50  6.57cpu  1463578tio  2912422k*sec
root       968  4.12cpu   814194tio   828162k*sec
biondo     626  3.31cpu  1178879tio  1012745k*sec
martin      63  2.37cpu  1577404tio   804765k*sec
jun         69  0.78cpu   136025tio   205377k*sec
tengli     147  0.30cpu   106665tio    48823k*sec
mihai       17  0.35cpu    85327tio   105058k*sec
non-cumulative (only stats since the last sa -s)

sa -i | more
2530 90226.48re 18.62cp   2240avio 5364k
   4  9296.56re  5.60cp 352026avio 8469k  netscape
   6 19010.72re  3.36cp  36976avio 3849k  in.rlogi
   5 10528.74re  1.59cp  58798avio 5357k  xterm
  22    30.08re  1.50cp  58364avio 6508k  cp
 128 23793.56re  1.28cp    511avio 2807k  csh
  33 16655.44re  1.23cp   8867avio 3987k  ***other
   2    81.06re  0.74cp  27567avio 7324k  idraw
   6   259.26re  0.46cp  26784avio 5665k  xv
  65  1879.39re  0.37cp   3553avio 2268k  vi
  27  2213.34re  0.35cp   9139avio 2886k  latex
  59   141.39re  0.23cp      1avio 3985k  rm
  39     0.33re  0.16cp  15544avio  956k  ps

ordered by memory usage: identifies big memory hogs (idraw on oas!):
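The STAT legend above suggests a quick health check: anything stuck in P or D is waiting on paging or disk. A minimal sketch of the idea; the listing here is a hand-made sample in the style of the report, and on a live system you would pipe `ps -aux` in instead.

```shell
# Flag processes whose STAT column (8th field of a ps -aux line) starts
# with D or P: those are waiting on disk or on a page-in.
sample='root 0 0.0 0.0 0 0 ? D Oct27 0:09 swapper
root 2 0.0 0.0 0 0 ? D Oct27 0:00 pagedaemon
matt 1903 0.0 1.0 5088 1576 p4 S 20:11 2:12 emacs'
stuck=`printf '%s\n' "$sample" | awk '$8 ~ /^[DP]/ {print $NF}'`
echo "$stuck"
```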
484 oas> sa -k | more
2887 90262.70re 19.54cp   2026avio 5175k
   4  9296.56re  5.60cp 352026avio 8469k  netscape
   2    81.06re  0.74cp  27567avio 7324k  idraw
  43    30.11re  1.51cp  29916avio 6464k  cp
   6   259.26re  0.46cp  26784avio 5665k  xv
   5 10528.74re  1.59cp  58798avio 5357k  xterm
  65   141.40re  0.24cp      1avio 3932k  rm
   6 19010.72re  3.36cp  36976avio 3849k  in.rlogi
  38 16655.51re  1.29cp   8470avio 3826k  ***other
   2     0.05re  0.05cp    276avio 2963k  newawk
  27  2213.34re  0.35cp   9139avio 2886k  latex
   9  2271.39re  0.12cp   5491avio 2877k  xtex
   2    12.86re  0.05cp  31800avio 2846k  rn
 132 23795.57re  1.32cp    522avio 2746k  csh
   5     0.39re  0.06cp      1avio 2717k  du
   6   370.08re  0.05cp   2736avio 2637k  gs
   4     0.63re  0.48cp  20603avio 2232k  acctcms
  67  1884.51re  0.39cp   3456avio 2184k  vi
  22    43.96re  0.11cp   6873avio 1650k  mail
   6     6.61re  0.04cp   8903avio 1644k  ftpd
  71    20.60re  0.12cp    115avio 1335k  sendmai*
 143     1.01re  0.10cp    535avio 1285k  grep
   8     0.17re  0.10cp   2145avio  963k  nroff

ordered by memory*cpu integral (memory usage over time):
sa -K | more
2887 90262.70re 19.54cp   2026avio 6065728k*sec
   4  9296.56re  5.60cp 352026avio 2843702k*sec  netscape
   6 19010.72re  3.36cp  36976avio  776918k*sec  in.rlogi
  43    30.11re  1.51cp  29916avio  584714k*sec  cp
   5 10528.74re  1.59cp  58798avio  511151k*sec  xterm
   2    81.06re  0.74cp  27567avio  326254k*sec  idraw
  38 16655.51re  1.29cp   8470avio  296700k*sec  ***other
 132 23795.57re  1.32cp    522avio  218111k*sec  csh
   6   259.26re  0.46cp  26784avio  157110k*sec  xv
   4     0.63re  0.48cp  20603avio   64114k*sec  acctcms
  27  2213.34re  0.35cp   9139avio   60380k*sec  latex
  65   141.40re  0.24cp      1avio   55477k*sec  rm

cpu split into system and user time:
2887 90262.70re 8.67u 10.87s   2026avio 5175k
   4  9296.56re 3.92u  1.68s 352026avio 8469k  netscape
   6 19010.72re 0.38u  2.99s  36976avio 3849k  in.rlogi
   5 10528.74re 0.69u  0.90s  58798avio 5357k  xterm
  43    30.11re 0.04u  1.47s  29916avio 6464k  cp
 132 23795.57re 0.62u  0.70s    522avio 2746k  csh
  38 16655.51re 0.58u  0.71s   8470avio 3826k  ***other
   2    81.06re 0.48u  0.26s  27567avio 7324k  idraw
   4     0.63re 0.46u  0.02s  20603avio 2232k  acctcms
   6   259.26re
0.18u 0.28s 26784avio 5665k  xv
  67  1884.51re 0.21u 0.18s 3456avio 2184k  vi

size of netscape and emacs:
427 oas> size /usr/local/bin/emacs
text    data    bss     dec     hex
1441792 614400  0       2056192 1f6000
428 oas> size /usr/local/bin/X11/netscape
text    data    bss     dec     hex
3923968 376832  26888   4327688 420908
The text size would be shared if the program were dynamically linked.

ps -au
USER       PID %CPU %MEM   SZ  RSS TT STAT START  TIME COMMAND
sergey    1043  0.0  0.0  200   92 p3 IW   18:44  0:00 xbiff
matt      1903  0.0  0.7 5084 1148 p4 S    20:11  1:31 emacs
biondo    1065  0.0  0.7  224 1148 p2 S    18:45  0:00 xload
bob        317  0.0  0.8  208 1172 p0 S    17:35  0:00 xload
matt      1339  0.0  0.2  200  264 p4 S    19:18  0:01 -csh (csh)
bob        257  0.0  0.0  168    0 p0 IW   17:34  0:00 -csh (csh)
biondo     934  0.0  0.0  168    0 p2 IW   18:44  0:00 -csh (csh)
sergey     979  0.0  0.0  136    0 p3 IW   18:44  0:00 -csh (csh)

Do not get scared of emacs (this is while I type this report in emacs on
oas). The size of emacs is big, but that is the virtual size of the job.
The resident size, the currently allocated memory, is just 1MB (as much as
an xload window). Eventually it will be swapped out. Notice that IW means
idle and therefore swapped out. You want to watch for STAT P and D:
paging and swapping problems.

perfmeter is not working on my window setup: sunview only ... mihai?

431 oas> ps -au | more
USER       PID %CPU %MEM   SZ  RSS TT STAT START  TIME COMMAND
matt      1903  0.0  0.7 5084 1080 p4 S    20:11  1:31 emacs      not used for less than 20s
matt      1903  0.0  0.0 5084    0 p4 IW   20:11  1:35 emacs      idle, swapped out
matt     19255  0.0  2.9 1600 4384 p4 S    17:20  0:03 netscape   startup
matt     19255 11.5  2.9 1600 4384 p4 S    17:20  0:03 netscape   while idle running
matt     19255  6.7  2.9 1620 4408 p4 S    17:20  0:05 netscape   while connecting to AU
matt     19255  8.2  3.0 1832 4652 p4 S    17:20  0:07 netscape   while connecting to AU

There is a way to turn off the standard paging algorithm for processes if
these paging algorithms are very inconvenient (usually when memory is
accessed randomly). That is shown by an A in the 4th field of STAT. I do
not know how to turn it on. Maybe a good idea for netscape?
I doubt it.

448 oas> ps -v
  PID TT STAT TIME SL RE PAGEIN SIZE  RSS LIM %CPU %MEM COMMAND
19338 p4 S    0:03  0 99     12 1600 4172  xx  0.0  2.7 netscape
 1903 p4 S    1:38  3  3    524 5084 1036  xx  0.0  0.7 emacs
19451 p4 R    0:00  0  0      0  300  552  xx  0.0  0.4 ps
 1339 p4 S    0:01  0  0     15  200  256  xx  0.0  0.2 csh
shows the # of pageins (netscape only just started up; emacs has been used
for editing).

when oas was empty:
451 oas> iostat 5
      tty          dk0          dk1          dk2          dk4         cpu
 tin tout bps tps msps bps tps msps bps tps msps bps tps msps us ni sy id
   1  344  10   2  0.0   0   0  0.0   1   0  0.0   1   0  0.0   2  0  6 92
   0  125   2   0  0.0   0   0  0.0   0   0  0.0   0   0  0.0   5  0 10 85

when oas was under medium load:
365 oas> iostat 5
      tty          dk0          dk1          dk2          dk4         cpu
 tin tout bps tps msps bps tps msps bps tps msps bps tps msps us ni sy id
   1  899  14   2  0.0  11   1  0.0   8   1  0.0   1   0  0.0   7  0  9 84
   0  136   1   1  0.0   0   0  0.0   0   0  0.0   0   0  0.0  62  0 38  0
   0   69   0   0  0.0   0   0  0.0   0   0  0.0   0   0  0.0  64  0 36  0
   0   60   0   0  0.0   0   0  0.0   0   0  0.0   0   0  0.0  67  0 33  0
   0   89  50   7  0.0   0   0  0.0   0   0  0.0   0   0  0.0  59  0 41  0
   3   80  11   2  0.0   0   0  0.0   0   0  0.0   0   0  0.0  52  0 48  0
   0   72   0   0  0.0   0   0  0.0   0   0  0.0   0   0  0.0  75  0 25  0
   0   52   2   0  0.0   0   0  0.0   0   0  0.0   0   0  0.0  71  0 29  0

only look at the rightmost columns:
us  % of time the system is in user state, executing user commands
ni  with nice
sy  in system state (system calls, kernel code, scheduling overhead)
id  idle
If idle time is always 0%: possible cpu bottleneck. If heavy load but 25%
idle: faster disk, more memory. If 50% of the time is system IO: look at
bps (disk transfer in kB) and tps (# of disk transfers) of each disk:
wrong mode, character versus block? all system files on one disk:
reorganize the file systems? are we disk bound?
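Per the rule above (idle always 0% hints at a cpu bottleneck), a small filter can count how many iostat samples show zero idle. A sketch run on rows pasted from the medium-load trace; live, you would pipe `iostat 5` in.

```shell
# Count iostat samples whose last column (id, % idle) is zero.
# Header lines are skipped by requiring the first field to be a number.
rows=' 1 899 14 2 0.0 11 1 0.0 8 1 0.0 1 0 0.0 7 0 9 84
 0 136 1 1 0.0 0 0 0.0 0 0 0.0 0 0 0.0 62 0 38 0
 0 69 0 0 0.0 0 0 0.0 0 0 0.0 0 0 0.0 64 0 36 0'
zero_idle=`printf '%s\n' "$rows" | awk '$1 ~ /^[0-9]+$/ && $NF == 0 {n++} END {print n+0}'`
echo "zero-idle samples: $zero_idle"
```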
Memory

vmstat: ignore the 1st line: unreliable.

easy load, Sunday:
524 oas> vmstat -S 5
 procs     memory              page              disk       faults     cpu
 r b w   avm   fre  si so pi po fr de sr d0 d1 d2 d3  in   sy  cs us sy id
 0 0 0 0 46436 144565 0 14 1 9 0 2 2 0 0 0 69 682 131 4 7 89
 1 0 0 0 46344 25 0 0 0 0 0 0 0 0 0 0 40 289 77 1 1 98
 0 0 0 0 46272 5 0 0 0 0 0 0 0 0 0 0 23 237 62 0 0 100

medium load, Monday:
367 oas> vmstat -S 5
 procs     memory              page              disk       faults     cpu
 r b w   avm   fre  si so pi po fr de sr d0 d1 d2 d3  in   sy  cs us sy id
 2 0 0 0 5696 485393 2 7 6 0 0 2 2 1 1 0 101 1404 165 7 9 84
 1 0 0 0 6336 20 0 0 0 0 0 0 2 0 0 0 132 7193 178 55 45 0
 2 0 0 0 6512 20 0 0 0 0 0 0 11 0 0 0 113 7702 112 62 38 0
 1 0 0 0 6548 10 0 0 0 0 0 0 0 0 0 0 69 8159 89 64 36 0
 1 0 0 0 6344 10 0 20 0 0 16 0 1 0 0 0 108 8109 124 64 36 0
 1 0 0 0 6132 10 0 0 0 0 0 0 0 0 0 0 71 8282 95 68 32 0
si/so (swap-ins, swap-outs) should be almost always zero!!

519 oas> vmstat 5 20
 procs     memory              page              disk       faults     cpu
 r b w   avm   fre  re at pi po fr de sr d0 d1 d2 d3  in   sy  cs us sy id
 1 0 0 0 2048 0 5 14 1 8 0 1 2 0 0 0 69 685 132 4 7 89
 0 0 0 0 2048 0 11 92 0 160 0 35 0 0 0 0 149 396 122 1 5 93

r column:   runnable jobs
b column:   blocked processes waiting for io: a disk problem if constantly
            a large # (idle CPU?)
w column:   swapped-out jobs: 1 (Trouble!!)
fre column: free memory
pi:         starting a job (connected with disk activity on d0)
po:         paging: possible trouble
the kernel parameter LOTSFREE decides when to page.
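The to-do list at the top mentions a script watching vmstat for pageouts (p 91). A minimal sketch of that idea, run on made-up sample rows in the `vmstat -S` layout (po is the 9th field); live, you would pipe `vmstat -S 5` in and mail a warning on a hit.

```shell
# Warn whenever the po (pageouts) column of vmstat -S output is nonzero.
# The first sample is skipped, since the notes say it is unreliable.
# These three rows are fabricated test data, not measurements from oas.
trace='1 0 0 0 46436 100 0 14 0 9 0 2 2 0 0 0 69 682 131 4 7 89
1 0 0 0 6336 20 0 0 0 0 0 0 2 0 0 0 132 7193 178 55 45 0
1 0 0 0 6344 10 0 20 7 0 16 0 1 0 0 0 108 8109 124 64 36 0'
warnings=`printf '%s\n' "$trace" | awk 'NR > 1 && $9 > 0 {print "pageouts:", $9}'`
echo "$warnings"
```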
healthy: 70% user time, 30% sys time, 1-2% idle time

things to do when a memory problem is sustained: add memory, reduce the
file system buffer, restrict memory traffic, use shared libraries (multiple
copies then use the same text segment in physical memory).

mbufs (the network claims memory and does not free it): usage hits an early
plateau and is only freed when rebooting.
537 oas> netstat -m
684/928 mbufs in use:
        149 mbufs allocated to data
        29 mbufs allocated to packet headers
        216 mbufs allocated to socket structures
        280 mbufs allocated to protocol control blocks
        4 mbufs allocated to routing table entries
        1 mbufs allocated to socket names and addresses
        2 mbufs allocated to zombie process information
        3 mbufs allocated to interface addresses
20/36 cluster buffers in use
152 Kbytes allocated to network (69% in use)
0 requests for memory denied
0 requests for memory delayed
0 calls to protocol drain routines
streams allocation:
                current   maximum   cumulative total   failures
streams              40        41          366             0
queues              142       146         1425             0
mblks                88      2090     30752472             0
dblks                88      2089      7903961             0
streams buffers:
  external            0         0            0             0
  within-dblk         0        46      1600032             0
  size <=   16       12        18        22584             0
  size <=   32       16        20      1549094             0
  size <=  128       60      2060      3890239             0
  size <=  512        0         2       549843             0
  size <= 1024        0         4       197429             0
  size <= 2048        0         1        94737             0
  size <= 8192        0         1            1             0
  size >  8192        0         0            0             0

swap space:
540 oas> pstat -s
22624k allocated + 97844k reserved = 120468k used, 207128k available
320MB available. rule of thumb: swap should be 4 times the memory: how much
memory do we have? 80MB? A VERY INACCURATE RULE. always half of memory
should be empty to be comfortable. if there is a problem: increase swap
space; distribute swap partitions over as many disks as possible: load
balancing.

/usr/adm/messages:
Oct 28 19:41:23 oas automount[188]: host spur-cddi not responding
Oct 28 19:41:23 oas automount[188]: host spur-cddi not responding
Oct 28 20:17:21 oas automount[188]: host spur-cddi not responding
Oct 28 20:17:21 oas last message repeated 2 times
100 console messages just today ...
Oct 27 11:03:22 oas vmunix: NFS write error: on host taal remote file system full
Oct 27 11:03:25 oas vmunix: NFS write error: on host taal remote file system full
Oct 27 11:03:27 oas vmunix: NFS write error: on host taal remote file system full
Oct 27 05:23:31 oas su: 'su root' succeeded for christin on /dev/ttyp7
Oct 27 05:24:59 oas vmunix: NFS write error: on host taal remote file system full
Oct 27 05:25:03 oas last message repeated 2 times
Oct 26 18:17:38 oas sendmail[2284]: NOQUEUE: SYSERR: /var/yp/sep.Stanford.EDU/mail.aliases: line 149: missing colon
Oct 26 05:33:03 etna.Stanford.EDU sendmail[13287]: AA08516: SYSERR: Out of memory!!: Not enough memory
Oct 24 10:56:46 etna.Stanford.EDU sendmail[26167]: AA26166: SYSERR: Cannot exec /bin/sh: Not enough memory
Oct 24 10:58:02 etna.Stanford.EDU sendmail[15290]: AA08365: SYSERR: Out of memory!!: Not enough memory
Oct 24 10:58:03 etna.Stanford.EDU sendmail[22303]: AA16862: SYSERR: xfAA16862: line 9: Out of memory!!: Not enough memory
Oct 21 17:31:37 oas sendmail[5713]: AA05713: SYSERR: prescan: too many tokens
Oct 21 17:31:37 oas sendmail[5715]: AA05713: SYSERR: prescan: too many tokens
Oct 21 17:31:37 oas sendmail[5715]: AA05713: SYSERR: prescan: too many tokens
Oct 21 17:31:41 oas sendmail[5715]: AA05715: SYSERR: prescan: too many tokens
Oct 21 17:31:41 oas sendmail[5715]: AA05715: SYSERR: prescan: too many tokens
Oct 27 17:33:30 oas ntpd[192]: /usr/local/bin/ntpd version $Revision: 3.4.1.9 $
Oct 27 17:33:30 oas ntpd[192]: patchlevel 13
Oct 28 00:18:23 koko.Stanford.EDU vmunix: NFS server oas not responding still trying
Oct 28 00:18:23 koko.Stanford.EDU vmunix: NFS server oas ok
Oct 27 17:33:58 oas inetd[238]: shell/tcp: unknown service
Oct 27 17:33:58 oas lpd[241]: printer/tcp: unknown service
Oct 27 17:34:01 oas inetd[238]: eklogin/tcp: unknown service
Oct 27 17:17:28 oas last message repeated 2 times
Oct 27 17:25:35 oas su: 'su root' succeeded for bob on /dev/ttyp9
Oct 27 17:31:06 oas shutdown: reboot by bob
Oct 27 17:31:09 oas syslogd: going down on signal 15
Oct 27 17:33:27 oas vmunix: NFS server (pid194@/homes) not responding still trying
Oct 27 17:33:27 oas vmunix: mem = 163356K (0x9f87000)
Oct 27 17:33:27 oas vmunix: avail mem = 157130752

disk:

adapt the block size to the usage of the disk
experiment with rotational delay (???)
(set reserved space to more than the default 10% ... optimizes access
time ... interesting)
make sure that no disk is full or close to full (each io then has to seek
because of fragmentation):
/dev/sd5a    1688131  1519326       0   100%  /home/oas/promax1
/dev/sd5d    1688131  1511671    7647    99%  /home/oas/promax2
/dev/sd5e    1688131  1471445   47873    97%  /home/oas/promax3
/dev/sd2d    1688131  1426013   93305    94%  /home/oas/biondo
/dev/sd2f    1688131  1382007  137311    91%  /home/oas/nizar
on other systems: taal? robson? spur?
remove files, reformat disks

network:

621 oas> nfsstat
Client rpc:
calls    badcalls retrans  badxid   timeout  wait     newcred  timers
339447   259      11       143      268      0        0        2027
badxid is of the same magnitude as timeout --> an NFS server is overloaded

Are there any diskless terminals hanging off oas? if yes: maybe better
elsewhere? do they swap and page? they are using the network then!

#/usr/sbin/spray oas
sending 1162 packets of lnth 86 to oas ...
        in 10.1 seconds elapsed time,
        1112 packets (95.70%) dropped by oas
        Sent:   114 packets/sec, 9.6K bytes/sec
        Rcvd:     4 packets/sec, 423 bytes/sec
This means spur can generate packets so much faster than oas can take them
that oas seems to drop them. oas is slow in reacting to the network (or
very busy: but this was when oas had:
638 oas> uptime
  1:32am  up 2 days, 9 hrs,  22 users,  load average: 0.01, 0.00, 0.00
could it be that the data arrives at oas corrupted and oas refuses to
accept the data?
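Coming back to the disk side: the no-filesystem-over-90-95% check above is easy to script (and could go in cron). A sketch run on the df numbers quoted in this report plus one made-up healthy row; live, you would pipe `df` in.

```shell
# Print filesystems at or above 90% capacity (5th df column).
# The /dev/sd0a row is invented sample data; the others are from this report.
dfout='/dev/sd5a 1688131 1519326 0 100% /home/oas/promax1
/dev/sd2f 1688131 1382007 137311 91% /home/oas/nizar
/dev/sd0a 100000 50000 50000 50% /'
full=`printf '%s\n' "$dfout" | awk '{sub("%","",$5)} $5 >= 90 {print $NF, $5"%"}'`
echo "$full"
```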
before the spur spray command:
643 oas> netstat -s | grep drop
        2536 connections closed (including 52 drops)
        60 embryonic connections dropped
        0 connections dropped by rexmit timeout
        35 connections dropped by keepalive
        0 fragments dropped (dup or out of space)
        5 fragments dropped after timeout
        1278 ip input queue drops
spur> /usr/sbin/spray oas
sending 1162 packets of lnth 86 to oas ...
        in 10.1 seconds elapsed time,
        1112 packets (95.70%) dropped by oas
        Sent:   114 packets/sec, 9.6K bytes/sec
        Rcvd:     4 packets/sec, 424 bytes/sec
646 oas> netstat -s | grep drop
        2541 connections closed (including 52 drops)
        60 embryonic connections dropped
        0 connections dropped by rexmit timeout
        35 connections dropped by keepalive
        0 fragments dropped (dup or out of space)
        5 fragments dropped after timeout
        1900 ip input queue drops
RESULT (the difference):
        5 connections closed (including 52 drops)
        622 ip input queue drops (!!!)
oas cannot read the incoming stream fast enough ... but this accounts for
only half of the packets dropped ... where is the rest? assumption
(p 195): the interface of oas refuses the data because it is corrupted
... but why do I not see Ierrs increase on oas? (if data is corrupted,
that is a job for a network analyzer). nfsstat -c tells us that retrans
were small (below 5%): nothing to worry about ... but where are the
packets?

660 oas> /usr/etc/spray taal
sending 1162 packets of lnth 86 to taal ...
        in 0.4 seconds elapsed time,
        460 packets (39.59%) dropped
        Sent:  3220 packets/sec, 270.5K bytes/sec
        Rcvd:  1945 packets/sec, 163.4K bytes/sec
659 oas> /usr/etc/spray spur
sending 1162 packets of lnth 86 to spur ...SPRAYPROC_CLEAR RPC: Program not registered
658 oas> /usr/etc/spray hakone
sending 1162 packets of lnth 86 to hakone ...
        in 10.3 seconds elapsed time,
        866 packets (74.53%) dropped
        Sent:   112 packets/sec, 9.4K bytes/sec
        Rcvd:    28 packets/sec, 2.4K bytes/sec
487 spur> /usr/sbin/spray hakone
sending 1162 packets of lnth 86 to hakone ...spray: send error RPC_CANT_SENDspray: send error RPC_CANT_SEND
        in 10.2 seconds elapsed time,
        1005 packets (86.49%) dropped by hakone
        Sent:   114 packets/sec, 9.6K bytes/sec
        Rcvd:    15 packets/sec, 1.3K bytes/sec
Our entire network sucks ... ??? what do these RPC_CANT_SEND mean???

false alarm: with bigger packets the system behaves fine:
491 spur> /usr/etc/spray oas -l 1000
sending 99 packets of lnth 1002 to oas ...
        in 0.1 seconds elapsed time,
        no packets dropped by oas
        1128 packets/sec, 1104.0K bytes/sec
301 hakone> /usr/etc/spray oas -l 1000
sending 99 packets of lnth 1002 to oas ...
        no packets dropped by oas
        1091 packets/sec, 1093187 bytes/sec
663 oas> /usr/etc/spray hakone -l 1000
sending 100 packets of lnth 1002 to hakone ...
        in 0.1 seconds elapsed time,
        23 packets (23.00%) dropped
        Sent:  1177 packets/sec, 1152.0K bytes/sec
        Rcvd:   906 packets/sec, 887.0K bytes/sec
conclusion: oas is just very slow at networking (no memory shortage, no
cpu shortage). what can we do? change the write and read buffers at all
clients mounting oas disks (p 197). what is the default? "Defaults for
rsize and wsize are set internally by the system kernel." what are they?
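The before/after netstat -s comparison earlier (1278 then 1900 ip input queue drops) can be automated: snapshot the counter, run the test, snapshot again, print the difference. A sketch using the two values quoted above; live, each snapshot would come from `netstat -s | grep drop`.

```shell
# Difference in "ip input queue drops" between two netstat -s snapshots.
before='1278 ip input queue drops'
after='1900 ip input queue drops'
b=`echo "$before" | awk '/ip input queue drops/ {print $1}'`
a=`echo "$after"  | awk '/ip input queue drops/ {print $1}'`
delta=`expr $a - $b`
echo "ip input queue drops during test: $delta"
```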
reset the timeout for remote systems (especially cddi): timeo=15 in /etc/fstab

heavy load:
28 oas> uptime
 11:32am  up 17:40,  35 users,  load average: 13.33, 8.45, 4.48
27 oas> pstat -T
  593/1888 files
  705/1018 inodes
  186/522  processes
45432/327596 swap
31 oas> sa -m | more
root       142  2.53cpu  72255tio  42760k*sec
jon          2  0.00cpu    180tio     22k*sec
harlan       3  0.00cpu    301tio     31k*sec
biondo       1  0.00cpu     94tio     21k*sec
diane        1  0.00cpu     85tio     20k*sec
christin     3  0.00cpu    351tio     18k*sec
nizar        1  0.00cpu     94tio     12k*sec
bob          3  0.00cpu    281tio     38k*sec
curt         1  0.00cpu     87tio     11k*sec
ee           4  0.00cpu    507tio     34k*sec
ps -aux | more
USER       PID %CPU %MEM   SZ  RSS TT STAT START  TIME COMMAND
matt     26949  7.7  0.4  344  612 p1 R    11:32  0:00 ps -aux
root         1  0.0  0.0   52    0 ?  IW   17:51  0:00 /sbin/init -
root         0  0.0  0.0    0    0 ?  D    17:51  0:06 swapper
root         2  0.0  0.0    0    0 ?  D    17:51  0:00 pagedaemon
root       180  0.0  0.0   16    0 ?  IW   17:52  0:00 /bin/screenblank -d 1800
jon        526  0.0  0.0  200    0 p3 IW   17:56  0:00 bash -i
bin        124  0.0  0.0   36    0 ?  IW   17:52  0:00 ypbind
matt       316  0.0  0.1  104  188 p1 S    17:55  0:00 -csh (csh)
root       122  0.0  0.0   44    0 ?  IW   17:52  0:00 /usr/etc/rpc.yppasswdd /
root       115  0.0  0.0   68    0 ?  IW   17:52  0:02 portmap
root       120  0.0  0.0   56    0 ?  IW   17:52  0:00 ypxfrd
root       126  0.0  0.0   40    0 ?  IW   17:52  0:00 keyserv
root       118  0.0  0.1  156  220 ?  S    17:52  5:56 ypserv

oas under heavy load late Sunday night. no one is on the machine:
187 /home/oas/sep/matt/Gmake# /usr/etc/spray hakone -l 1000
sending 100 packets of lnth 1002 to hakone ...
        in 0.1 seconds elapsed time,
        18 packets (18.00%) dropped
        Sent:  1103 packets/sec, 1079.4K bytes/sec
        Rcvd:   904 packets/sec, 885.1K bytes/sec
514 spur> /usr/etc/spray oas -l 1000
sending 99 packets of lnth 1002 to oas ...
        in 0.1 seconds elapsed time,
        24 packets (24.24%) dropped by oas
        Sent:   867 packets/sec, 848.6K bytes/sec
        Rcvd:   656 packets/sec, 642.9K bytes/sec
Client rpc:
calls    badcalls retrans  badxid   timeout  wait     newcred  timers
1887450  906      71       587      974      0        0        1780
181 /home/oas/sep/matt/Gmake# vmstat 5 20
 procs     memory              page              disk       faults     cpu
 r b w   avm   fre  re at pi po fr de sr d0 d1 d2 d3  in   sy  cs us sy id
 0 6 0 0 92240 0 20 7 0 0 0 3 2 1 0 0 168 7767 184 17 27 56
 0 4 0 0 92284 0 1 0 0 0 0 0 0 0 0 0 19 50 29 0 0 100
 0 4 0 0 92308 0 0 0 0 0 0 0 0 0 0 0 16 45 28 0 0 100
 0 4 0 0 92308 0 0 0 0 0 0 0 0 0 0 0 13 26 27 0 0 100
 0 4 0 0 92292 0 0 0 0 0 0 0 0 0 1 0 23 60 30 1 1 99
 0 5 0 0 92244 0 0 0 0 0 0 0 0 0 0 0 18 81 26 0 1 99
 0 5 0 0 92216 0 10 0 0 0 0 0 11 0 0 0 48 85 46 3 2 95
179 /home/oas/sep/matt/Gmake# iostat 5
      tty          dk0          dk1          dk2          dk4         cpu
 tin tout bps tps msps bps tps msps bps tps msps bps tps msps us ni sy id
   1   67  16   2  0.0  12   1  0.0   2   0  0.0  11   0  0.0  17  0 28 56
   0   16   0   0  0.0   0   0  0.0   0   0  0.0   0   0  0.0   0  0  0 100
   0   16   0   0  0.0   0   0  0.0   0   0  0.0   0   0  0.0   0  0  0 100
   0   16   0   0  0.0   0   0  0.0   0   0  0.0   0   0  0.0   1  0  1 98
   0   16   0   0  0.0   0   0  0.0   0   0  0.0   0   0  0.0   0  0  0 100
   0   16  29   4  0.0   0   0  0.0   0   0  0.0   0   0  0.0   3  0  1 96
 518   426.08re  3.47cp   494avio  118108k*sec
   4     0.63re  0.48cp 20603avio   64114k*sec  acctcms
  48   168.91re  0.26cp  2390avio   33187k*sec  ***other
  12    93.72re  0.62cp     0avio    6300k*sec  update*
   7     8.46re  0.03cp   765avio    4856k*sec  automou*
   7     0.14re  0.08cp  1965avio    3110k*sec  nroff
swapped out jobs:
bob      23446  0.0  0.0  236    0 p7 D  Nov 16  2:48 xwais
root     18181  0.0  0.2   76  260 ?  D  01:17   0:00 /bin/csh -f /etc/ypfiles/makealiases
bob      18319  0.0  0.4  148  544 pa D  01:47   0:00 /usr/local/bin/perl /r4/bob/bin/all/wspur2
root     18337  0.0  0.2   72  264 ?  D  01:50   0:00 awk -f /usr/local/src/our/util/autonice.oas.awk
root     18338  0.0  0.0   52    0 ?  D  01:50   0:00 /bin/csh -f /etc/autonice.oas.sh
root         0  0.0  0.0    0    0 ?  D  Nov 15  3:04 swapper
root         2  0.0  0.0    0    0 ?
D  Nov 15  0:07 pagedaemon

161 /home/oas/sep/matt/Gmake# vmstat -s 5
    1358229 swap ins
          1 swap outs
    1075260 pages swapped in
    1075838 pages swapped out
   24654202 total address trans. faults taken
    1256949 page ins
     103478 page outs
    2867081 pages paged in
    1100611 pages paged out
          0 sequential process pages freed
    7871476 total reclaims (0% fast)
    7869784 reclaims from free list
      51010 intransit blocking page faults
          0 zero fill pages created
    2570818 zero fill page faults
          0 executable fill pages created
          0 executable fill page faults
       2119 swap text pages found in free list
    7867665 inode text pages found in free list
          0 file fill pages created
          0 file fill page faults
    1345014 pages examined by the clock daemon
         35 revolutions of the clock hand
    2160117 pages freed by the clock daemon
   71671522 cpu context switches
  104639097 device interrupts
   29119261 traps
-1267382766 system calls (negative: counter wrapped?)
   36799782 total name lookups (cache hits -54% per-process)
            toolong 2337525
 procs     memory              page              disk       faults     cpu
 r b w   avm   fre  re at pi po fr de sr d0 d1 d2 d3  in   sy  cs us sy id
 0 7 0 0 88420 0 20 7 0 0 0 3 2 1 0 0 169 7789 184 17 28 56
 0 7 0 0 88376 0 0 0 0 0 0 0 0 0 0 0 46 133 63 1 1 99
 0 7 0 0 88368 0 0 0 0 0 0 0 0 0 0 0 27 84 41 0 0 100
 0 6 0 0 88392 0 0 0 0 0 0 0 2 0 0 0 32 84 46 0 2 97
 0 6 0 0 88380 0 4 0 0 0 0 0 6 0 0 0 53 127 52 2 2 96
 0 6 0 0 88360 0 7 0 0 0 0 0 2 0 0 0 41 85 39 3 1 96
 0 6 0 0 88360 0 1 0 0 0 0 0 0 0 0 0 22 75 35 0 0 100
160 /home/oas/sep/matt/Gmake# uptime
  1:45am  up 4 days, 12 hrs,  31 users,  load average: 10.43, 11.03, 10.46
158 /home/oas/sep/matt/Gmake# pstat -T
  543/1888 files
  391/1018 inodes
  175/522  processes
98604/655272 swap

I spent some time this weekend looking into the oas tuning problems. I
have some ideas on how to pursue the problem. Here are my comments, for
whatever they are worth. At the end of the file is an Appendix which is a
bit like my protocol of the things I tested this weekend. Some of you may
find it useful to look at some of the commands and some of the data.
Most of my wisdom is from the "System Performance Tuning" book by Mike
Loukides (O'Reilly).

What we could do:
o some ad hoc fixes (which do not interfere with the users' business) for
  some guessed problems.
o continue to run our applications and analyse what goes wrong when it
  goes wrong. (Alternatively, we could copy some scripts that test
  different aspects of our system. I have the source code.)
o solve the remaining bottlenecks (maybe by contracting outside help).

Some ad hoc fixes:

o oas is configured with MAXUSERS: 32. According to my back of the
  envelope calculations it should be MAXUSERS: 54 (MAXUSERS does not only
  count the number of users on the network). We often are (actually right
  now) 32 real users. We should reconfigure the kernel. This will make
  the kernel bigger (but that should not matter much considering we have
  a 160 MByte machine) by adding more tables (especially for inodes).
  Today during medium usage:
  350 oas> pstat -T
    641/1888 files
    906/1018 inodes
    187/522  processes
  77372/327596 swap

o No filesystem should have more than 90-95% load. Otherwise the
  fragmentation makes disk seeks longer:
  /dev/sd5a    1688131  1519326       0   100%  /home/oas/promax1
  /dev/sd5d    1688131  1511671    7647    99%  /home/oas/promax2
  /dev/sd5e    1688131  1471445   47873    97%  /home/oas/promax3
  /dev/sd2d    1688131  1426013   93305    94%  /home/oas/biondo
  /dev/sd2f    1688131  1382007  137311    91%  /home/oas/nizar
  Do we need a crontab entry?

o sendmail is a memory hog. It should be moved to a different machine,
  e.g. taal (this should be entirely transparent to the users). A couple
  of other things we could do about mail short of moving it: do not run
  the comsat daemon ("you have new mail") and do not run xbiff at the
  same time (take it out of the default). Nice sendmail. Set the queue
  checking to 3 hrs (the -q3h option).

o users should avoid running xload windows on oas, except system users.
  (I do not like this since users should be left to do whatever they
  want ..
  but xloads are heavy on memory and cpu and do not deliver much for a
  standard user.)

o emacs and netscape should be compiled with shared libraries (which will
  cut about half their size when multiple copies are running). BTW, where
  is the netscape documentation? There are no man pages. emacs is known
  as a memory hog: but its actual resident size is hardly ever more than
  1MB (comparable with a single xload window on oas). That is not much on
  a 160 MByte machine. In our summary listings of machine usage ("sa")
  emacs is not listed: its effect seems to be negligible.
  427 oas> size /usr/local/bin/emacs
  text    data    bss     dec     hex
  1441792 614400  0       2056192 1f6000
  428 oas> size /usr/local/bin/X11/netscape
  text    data    bss     dec     hex
  3923968 376832  26888   4327688 420908

o daemons which we may not need:
  accounting: acctcms
  comsat: "you have new mail"
  routed: "only default route"; if our gateway is reliable we could set
  it to gateway. (maybe not such a good idea?)
  sendmail: nice to low priority (high memory)

o alias pwd to dirs

o the following programs used quite a bit of cumulative cpu. what are
  they?
  domainna rpc.rsta in.comsa

o idraw is a BIG memory hog. Is there any reason why we could not run it
  from a machine other than oas (taal, zarand)? It actually shows up in
  the "sa" listing, and I see no reason to run it on oas. (I would again
  like to avoid restrictions ...)

o Are there any diskless terminals hanging off oas? if yes: maybe better
  elsewhere? do they swap and page? they are using the network then.
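The MAXUSERS fix above comes from the book formula in the appendix; written out explicitly it looks like this. The two NFS client counts are illustrative guesses (chosen so the total reproduces the 54 suggested above), not measured numbers for oas.

```shell
# Book formula: maxusers = 1.5 * (basic sys + active users
#                                 + NFS boot clients
#                                 + 0.5 * NFS export clients), rounded up.
# BOOTS and EXPORTS are guesses for illustration, not measurements.
BASIC=2; ACTIVE=15; BOOTS=12; EXPORTS=14
MAXUSERS=`expr \( $BASIC + $ACTIVE + $BOOTS + $EXPORTS / 2 \) \* 3 / 2`
NPROC=`expr 10 + 16 \* $MAXUSERS`
# nfile, window-system case, from the kernel source quoted earlier:
NFILE=`expr 16 \* \( $NPROC + 16 + $MAXUSERS \) / 5 + 64`
echo "MAXUSERS=$MAXUSERS NPROC=$NPROC nfile=$NFILE"
```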
CPU bound:

I find that very hard to believe, but I observed it at least once today,
when oas was under medium load:
365 oas> iostat 5
      tty          dk0          dk1          dk2          dk4         cpu
 tin tout bps tps msps bps tps msps bps tps msps bps tps msps us ni sy id
   1  899  14   2  0.0  11   1  0.0   8   1  0.0   1   0  0.0   7  0  9 84
   0  136   1   1  0.0   0   0  0.0   0   0  0.0   0   0  0.0  62  0 38  0
   0   69   0   0  0.0   0   0  0.0   0   0  0.0   0   0  0.0  64  0 36  0
   0   60   0   0  0.0   0   0  0.0   0   0  0.0   0   0  0.0  67  0 33  0
   0   89  50   7  0.0   0   0  0.0   0   0  0.0   0   0  0.0  59  0 41  0
   3   80  11   2  0.0   0   0  0.0   0   0  0.0   0   0  0.0  52  0 48  0
no idle time whatsoever .... hmmmm ... that may indicate that the system
is having some problems, especially since the sy column is big but not
outrageous. I think we would need more data.

Disk:

I had the impression that disk IO was okay (but maybe there is too little
data). There was not an even distribution of disk activity, though: the
system disk dk0 only (that is okay if it is not causing problems).
365 oas> iostat 5
      tty          dk0          dk1          dk2          dk4         cpu
 tin tout bps tps msps bps tps msps bps tps msps bps tps msps us ni sy id
   1  899  14   2  0.0  11   1  0.0   8   1  0.0   1   0  0.0   7  0  9 84
   0  136   1   1  0.0   0   0  0.0   0   0  0.0   0   0  0.0  62  0 38  0
   0   69   0   0  0.0   0   0  0.0   0   0  0.0   0   0  0.0  64  0 36  0
   0   60   0   0  0.0   0   0  0.0   0   0  0.0   0   0  0.0  67  0 33  0
   0   89  50   7  0.0   0   0  0.0   0   0  0.0   0   0  0.0  59  0 41  0

Memory:

Look at the sa commands in the appendix. Netscape is a big sinner; emacs
does not even make the top 10 *smiles*. Run "vmstat -S 5" and gather data.

NFS:

Here are some remarks about NFS. I find it very hard to measure NFS
performance conclusively. Again, it probably would be interesting to
observe oas under load. However, there are some indications that we may
have a problem: I am not sure why and how, but our system (especially
oas) is not doing well in NFS-related things. spur can send packets at a
rate that oas cannot match:
#/usr/sbin/spray oas
sending 1162 packets of lnth 86 to oas ...
        in 10.1 seconds elapsed time,
        1112 packets (95.70%) dropped by oas
        Sent:   114 packets/sec, 9.6K bytes/sec
        Rcvd:     4 packets/sec, 423 bytes/sec
96% dropped !!!
a data transfer of 10K bytes per second !!! It seems that this is not
because the network (cddi?) is corrupting the packets (netstat -s
does not list any ierrs or oerrs, it seems ...). oas is just very
slow in comparison. My book says a drop rate of 10% (at this packet
size) can be tolerated. Our entire system is not great:

660 oas> /usr/etc/spray taal
sending 1162 packets of lnth 86 to taal ...
        in 0.4 seconds elapsed time,
        460 packets (39.59%) dropped
Sent:   3220 packets/sec, 270.5K bytes/sec
Rcvd:   1945 packets/sec, 163.4K bytes/sec

However if you send bigger packets, our system does okay:

516 spur> /usr/sbin/spray oas -l 1000
sending 99 packets of lnth 1002 to oas ...
        in 0.1 seconds elapsed time,
        no packets dropped by oas
        1119 packets/sec, 1095.9K bytes/sec

I still would like to test Martin's example ... but I guess I am
running out of time ...

Conclusion: I suggest:
- we reconfigure the kernel
- we change oas load in ways which are transparent to the user
- we use shared libraries for identical programs that run concurrently
- we write some test and monitoring scripts
- we show the data to CAST, Steve Cole, Dave Nichols, Stew Levin,
  Phil Farell
- we find outside professional help

Matt

Tuning: oas

5:45: we just experienced a peaking of oas to a workload of 6. I
wonder what the reason is ... I took some measurements. The peak
continued for a few minutes and then vanished as quickly as it came.
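For the monitoring scripts item: the drop percentages can be pulled
out of saved spray transcripts mechanically and checked against the
~10% rule of thumb from the book. A sketch (the 10 threshold and the
BAD/ok labels are my own choices):

```shell
#!/bin/sh
# Sketch: read saved spray output on stdin, e.g. lines like
#   "1112 packets (95.70%) dropped by oas"
# and flag runs whose drop rate exceeds the ~10% the book tolerates.
awk '
/dropped/ {
    for (i = 1; i <= NF; i++)
        if ($i ~ /^\([0-9.]+%\)$/) {
            pct = $i
            gsub(/[()%]/, "", pct)          # "(95.70%)" -> "95.70"
            print (pct + 0 > 10 ? "BAD" : "ok"), pct "% dropped"
        }
}'
```

Lines like "no packets dropped by oas" carry no percentage field and
pass through silently, which is the right answer for them anyway.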
581 oas> uptime
 5:27pm up 7 days, 8:06, 28 users, load average: 6.16, 2.69, 1.20

580 oas> iostat 5
      tty            dk0            dk1            dk2            dk4           cpu
 tin tout  bps tps msps  bps tps msps  bps tps msps  bps tps msps  us ni sy id
   1   43   13   2  0.0   15   1  0.0    6   0  0.0   11   0  0.0   5  1  6 88
   3   19    0   0  0.0    0   0  0.0    0   0  0.0    0   0  0.0   4  0  2 95
   4   20    0   0  0.0    0   0  0.0    0   0  0.0    0   0  0.0   1  0  1 98
   4   20    0   0  0.0    0   0  0.0    0   0  0.0    0   0  0.0   0  0  3 97
   3   20    0   0  0.0    0   0  0.0    0   0  0.0    0   0  0.0   4  0  3 93
   2   18   63   9  0.0    0   0  0.0    7   1  0.0    0   0  0.0   4  0  2 93
   0   16    0   0  0.0    0   0  0.0    0   0  0.0    0   0  0.0   1  0  0 99
   0   16    0   0  0.0    0   0  0.0    0   0  0.0    0   0  0.0   1  0  4 95
   0   16    0   0  0.0    0   0  0.0    0   0  0.0    0   0  0.0   1  0  2 97
   0   40   15   3  0.0    0   0  0.0    0   0  0.0    0   0  0.0   2  0  2 97

USER PID %CPU %MEM SZ RSS TT STAT START TIME COMMAND
root 0 0.0 0.0 0 0 ? D Nov 23 3:00 swapper
root 1 0.0 0.0 52 0 ? IW Nov 23 0:03 /sbin/init -
root 2 0.0 0.0 0 0 ? D Nov 23 0:09 pagedaemon
root 154 0.0 0.0 244 0 ? IW Nov 23 0:04 sendmail: accepting connections
bin 124 0.0 0.0 36 0 ? IW Nov 23 0:02 ypbind
root 115 0.0 0.1 68 96 ? S Nov 23 0:22 portmap
root 122 0.0 0.0 44 0 ? IW Nov 23 0:00 /usr/etc/rpc.yppasswdd /etc/ypfiles/yppasswd -m passwd
sergey 15589 0.0 0.2 148 380 r9 D 15:48 0:00 -csh (csh)
root 118 0.0 0.1 152 188 ? S Nov 23 3:50 ypserv
root 120 0.0 0.0 56 0 ? IW Nov 23 0:03 ypxfrd
root 126 0.0 0.0 40 0 ? IW Nov 23 0:00 keyserv
root 129 0.0 0.0 40 0 ? IW Nov 23 0:00 rpc.ypupdated
root 137 0.0 0.0 40 64 ? S Nov 23 0:10 in.routed
root 140 0.0 0.0 16 0 ? I Nov 23 0:37 (biod)
root 141 0.0 0.0 16 0 ? I Nov 23 0:37 (biod)
root 142 0.0 0.0 16 0 ? I Nov 23 0:37 (biod)
root 143 0.0 0.0 16 0 ? I Nov 23 0:37 (biod)
root 146 0.0 0.0 72 0 ? IW Nov 23 2:47 syslogd
root 194 0.0 0.0 480 0 ? IW Nov 23 13:08 automount
root 174 0.0 0.0 48 0 ? IW Nov 23 0:01 rarpd -a
root 214 0.0 0.0 92 0 ? IW Nov 23 0:06 /etc/opt/licenses/lmgrd.ste -c /etc/opt/licenses/licenses_combined
root 200 0.0 0.2 56 256 ? S < Nov 23 0:03 /usr/local/bin/ntpd -t
root 166 0.0 0.0 28 0 ? S Nov 23 11:16 (nfsd)
root 164 0.0 0.0 28 0 ? S Nov 23 11:27 (nfsd)
root 165 0.0 0.1 92 228 ? S Nov 23 0:58 rpc.mountd -n
root 167 0.0 0.0 28 0 ? S Nov 23 11:37 (nfsd)
root 168 0.0 0.0 28 0 ? S Nov 23 11:11 (nfsd)
root 169 0.0 0.0 28 0 ? S Nov 23 11:17 (nfsd)
root 170 0.0 0.0 28 0 ? S Nov 23 11:35 (nfsd)
root 171 0.0 0.0 28 0 ? S Nov 23 11:26 (nfsd)
root 172 0.0 0.0 28 0 ? S Nov 23 11:37 (nfsd)
root 187 0.0 0.0 16 0 ? IW Nov 23 0:00 /bin/screenblank -d 1800
root 175 0.0 0.0 24 0 ? IW Nov 23 0:00 rarpd -a
root 176 0.0 0.0 36 0 ? IW Nov 23 0:00 rarpd -a
root 177 0.0 0.0 24 0 ? IW Nov 23 0:00 rarpd -a
root 179 0.0 0.0 52 0 ? IW Nov 23 0:00 rpc.bootparamd
root 219 0.0 0.0 112 0 ? IW Nov 23 0:06 suntechd -T oas 4 -c /etc/opt/licenses/licenses_combined
root 182 0.0 0.0 68 0 ? IW Nov 23 0:00 rpc.statd
root 195 0.0 0.0 372 0 ? IW Nov 23 0:02 /usr/local/bin/X11/fs -config /usr/local/etc/fs.config
root 185 0.0 0.0 144 0 ? IW Nov 23 0:00 rpc.lockd
root 203 0.0 0.1 108 168 ? S Nov 23 1:07 /usr/local/bin/aarpd leq0 ES-Ethernet
root 205 0.0 0.1 88 224 ? S Nov 23 17:41 /usr/local/bin/atis
root 208 0.0 0.0 80 0 ? IW Nov 23 0:02 /usr/local/bin/snitch -S -f SUN 4 SunOS 4.0 UNIX -l lwsrv
root 211 0.0 0.2 156 292 ? S Nov 23 0:02 /usr/local/bin/aufs -U 20 -V /usr/local/lib/cap/afpvols -l /dev/null -n oas
root 213 0.0 0.1 96 200 ? S Nov 23 0:00 (lwsrv)
root 216 0.0 0.0 96 0 ? IW Nov 23 0:00 /usr/etc/snmpd.cfddi -c /etc/snmpd.cfddi.conf
root 220 0.0 0.0 136 0 ? IW Nov 23 0:11 licsrv -T oas 4 -c /etc/opt/licenses/licenses_combined
root 223 0.0 0.0 12 8 ? I Nov 23 94:24 update
root 226 0.0 0.0 128 0 ? IW Nov 23 0:24 cron
root 231 0.0 0.0 52 36 ? S Nov 23 0:45 inetd
mihai 14029 0.0 0.1 200 140 co I 15:05 0:00 xclock
root 1431 0.0 0.0 28 0 ? IW Nov 29 0:17 in.rlogind
matt 13017 0.0 0.0 76 0 ? IW Nov 28 0:02 /usr/local/lib/emacs/19.29/sparc-sun-sunos4.1/wakeup 60
root 28377 0.0 0.0 28 0 ? IW Nov 27 0:29 in.rlogind
nizar 29430 0.0 0.1 200 96 p7 S Nov 27 0:10 xbiff
mihai 9723 0.0 0.0 132 0 co IW Nov 29 0:00 -csh (csh)
biondo 18769 0.0 0.0 216 0 p5 DW Nov 24 0:11 -csh (csh)
root 11162 0.0 0.0 60 0 ? IW Nov 24 0:10 rpc.rquotad
bob 7330 0.0 0.0 176 0 p4 IW Nov 28 0:01 -csh (csh)
nizar 29053 0.0 0.2 176 268 q4 D Nov 27 0:04 -csh (csh)
mihai 13989 0.0 0.0 80 0 co IW 15:04 0:00 /bin/csh -f /usr/local/lib/share/setup/windows/xinitrc
root 24968 0.0 0.0 28 0 ? IW Nov 29 0:13 in.rlogind
bob 24033 0.0 0.0 48 0 q7 IW 01:14 0:00 /usr/ucb/rlogin oas -l christin
christin 24080 0.0 0.0 172 0 p9 IW Nov 29 0:02 -csh (csh)
biondo 18826 0.0 0.1 212 96 p5 S Nov 24 0:17 xbiff
biondo 17599 0.0 0.0 216 0 p5 D 17:26 0:00 -csh (csh)
bob 18889 0.0 0.0 168 0 qc IW 20:05 0:00 -csh (csh)
root 8368 0.0 0.0 28 0 ? IW 11:29 0:00 in.rlogind
christin 23940 0.0 0.1 200 104 ? S Nov 26 0:09 xbiff
christin 16382 0.0 0.0 728 0 p9 IW 17:36 0:02 idraw
sean 10300 0.0 0.0 160 0 pe IW Nov 27 0:01 -csh (csh)
sergey 14306 0.0 0.1 200 92 ? I 15:08 0:00 xbiff
yalei 17281 0.0 0.1 148 224 qd D 17:10 0:00 -csh (csh)
mihai 14113 0.0 0.0 232 0 co IW 15:05 0:00 /usr/openwin/bin/xview/cmdtool -Wp 0 0 -Ws 590 77 -C
biondo 21418 0.0 0.1 4852 140 p5 S Nov 24 0:38 /advance/sys/exe/frame/bin/bin.sun4/viewer +viewerIsServer -geometry -0+0
yalei 17608 0.0 0.2 36 296 qe D 17:27 0:00 login -r rainier.Stanford.EDU
matt 24688 0.0 0.2 180 264 p0 S Nov 29 0:03 -csh (csh)
nizar 28378 0.0 0.0 168 0 p7 IW Nov 27 0:03 -csh (csh)
root 4379 0.0 0.0 28 0 ? IW 08:34 0:00 in.rlogind
christin 24969 0.0 0.0 200 0 pa IW Nov 29 0:04 -csh (csh)
arnaud 8863 0.0 0.1 184 100 pd S Nov 28 0:07 xbiff -geometry 100x100 -bg gainsboro -fg black
root 17601 0.0 0.0 28 0 ? IW 17:26 0:00 in.rlogind
nizar 17602 0.0 0.2 36 296 q2 D 17:26 0:00 login -r baker.Stanford.EDU
bob 7374 0.0 0.0 228 0 p6 IW Nov 28 0:03 -csh (csh)
mihai 14040 0.0 0.0 248 0 co IW 15:05 0:00 vkbd -nopopup
curt 15472 0.0 0.0 144 0 pc IW Nov 29 0:00 -csh (csh)
matt 13016 0.0 0.0 2292 0 ? IW Nov 28 1:55 emacsd
root 5062 0.0 0.0 28 0 ? IW Nov 29 0:00 in.rlogind
bob 7458 0.0 0.5 208 720 p4 S Nov 28 0:00 xload
root 18768 0.0 0.0 28 0 ? IW Nov 24 0:27 in.rlogind
root 8814 0.0 0.0 28 0 ? IW Nov 28 0:09 in.rlogind
root 17177 0.0 0.0 28 0 ? IW Nov 28 0:17 in.rlogind
bob 7457 0.0 0.1 200 100 p4 S Nov 28 0:07 xbiff
root 18888 0.0 0.0 36 0 ? IW 20:05 0:00 in.telnetd
matt 5063 0.0 0.0 140 0 pf IW Nov 29 0:00 -csh (csh)
root 15588 0.0 0.0 28 0 ? IW 15:48 0:00 in.rlogind
arnaud 8865 0.0 0.3 196 452 pd S Nov 28 0:03 xdaliclock -24 -nosecond -cycle -geometry 300x100
mihai 14188 0.0 0.0 48 0 r6 IW 15:05 0:00 rlogin antares
root 14032 0.0 0.0 404 0 co IW 15:05 0:00 /usr/local/bin/X11/xterm -T HERE -name HERE -e /usr/local/lib/share/setup/exec/xrlogin oas
yalei 4380 0.0 0.0 132 0 q9 IW 08:34 0:00 -csh (csh)
mihai 13977 0.0 0.0 32 0 co IW 15:04 0:00 /bin/sh /usr/openwin/bin/openwin
bob 14247 0.0 0.0 176 0 r1 IW Nov 28 0:00 -csh (csh)
root 7232 0.0 0.0 52 0 ? IW Nov 25 0:06 /usr/lib/lpd
root 15342 0.0 0.0 28 0 ? IW Nov 29 0:01 in.rlogind
yalei 7259 0.0 0.1 156 168 q5 D Nov 29 0:00 -csh (csh)
mihai 14187 0.0 0.0 48 0 r6 IW 15:05 0:00 rlogin antares
matt 24741 0.0 0.0 76 0 ? IW Nov 29 0:01 /usr/local/lib/emacs/19.29/sparc-sun-sunos4.1/wakeup 60
root 17607 0.0 0.0 28 0 ? IW 17:27 0:00 in.rlogind
root 7373 0.0 0.0 28 0 ? IW Nov 28 0:09 in.rlogind
christin 29657 0.0 0.0 3776 0 ? IW Nov 27 0:11 /advance/sys/exe/frame/bin/bin.sun4/viewer +viewerIsServer -geometry -0+0
arnaud 8815 0.0 0.0 176 0 pd IW Nov 28 0:03 -csh (csh)
mihai 14053 0.0 0.0 136 0 r4 IW 15:05 0:00 -csh (csh)
mihai 14114 0.0 0.0 232 0 co IW 15:05 0:00 olwm
root 14051 0.0 0.0 28 0 ? IW 15:05 0:00 in.rlogind
root 24079 0.0 0.0 28 0 ? IW Nov 29 0:08 in.rlogind
mihai 14186 0.0 0.0 320 0 co IW 15:05 0:00 /usr/openwin/bin/xterm -name ANTARES -e rsh antares
christin 24034 0.0 0.0 136 0 qb IW 01:14 0:00 -csh (csh)
root 10299 0.0 0.0 28 0 ? IW Nov 27 0:04 in.rlogind
bob 14848 0.0 0.0 392 0 p4 IW 15:22 0:01 xtex ../Dvi/FGDP.dvi
arnaud 11161 0.0 0.0 152 0 pb IW Nov 28 0:00 -csh (csh)
jun 8369 0.0 0.0 124 0 q8 IW 11:29 0:00 -csh (csh)
mihai 14030 0.0 0.7 204 1120 co S 15:05 0:00 xload
diane 15343 0.0 0.0 100 0 p3 IW Nov 29 0:00 -csh (csh)
arnaud 8861 0.0 0.5 208 716 pd S Nov 28 0:00 xload
root 23963 0.0 0.0 28 0 ? IW 01:13 0:00 in.rlogind
mihai 14116 0.0 0.0 128 0 r5 IW 15:05 0:00 -bin/csh (csh)
christin 13854 0.0 0.0 48 0 pa IW 14:52 0:00 /usr/ucb/rlogin pangea -l ecker
christin 13841 0.0 0.0 48 0 pa IW 14:52 0:00 /usr/ucb/rlogin pangea -l ecker
mihai 14115 0.0 0.0 212 0 co IW 15:05 0:00 olwmslave
root 15471 0.0 0.0 28 0 ? IW Nov 29 0:01 in.rlogind
bob 23964 0.0 0.0 176 0 q7 IW 01:13 0:00 -csh (csh)
root 14372 0.0 0.0 28 0 ? IW Nov 29 0:01 in.rlogind
bob 24019 0.0 0.0 48 0 q7 IW 01:14 0:00 /usr/ucb/rlogin oas -l christin
root 11160 0.0 0.0 28 0 ? IW Nov 28 0:01 in.rlogind
jon 1432 0.0 0.0 112 0 qf IW Nov 29 0:00 -csh (csh)
root 24687 0.0 0.0 28 24 ? S Nov 29 0:05 in.rlogind
bob 1215 0.0 0.0 124 0 q6 IW Nov 29 0:00 -csh (csh)
matt 24740 0.0 1.5 2992 2280 p0 D Nov 29 3:22 emacsd
bob 14899 0.0 0.0 508 0 p4 IW 15:25 0:00 gs -dQUIET -dNOPAUSE -I/usr/local/lib/tex/ps -
mihai 14052 0.0 0.0 48 0 r3 IW 15:05 0:00 /usr/ucb/rlogin oas
mihai 10614 0.0 0.5 208 700 ? S Nov 28 0:10 xload
mihai 14043 0.0 0.0 48 0 r3 IW 15:05 0:00 /usr/ucb/rlogin oas
root 7329 0.0 0.0 28 0 ? IW Nov 28 0:02 in.rlogind
arnaud 17564 0.0 0.0 48 28 qa S 17:20 0:00 rlogin elaine-best
mihai 13982 0.0 2.4 1684 3724 co S 15:04 0:12 /usr/openwin/bin/xnews :0 -auth /homes/sep/mihai/.xnews.oas:0
root 24032 0.0 0.0 28 0 ? IW 01:14 0:00 in.rlogind
jun 17178 0.0 0.0 168 0 p8 IW Nov 28 0:05 -csh (csh)
root 14246 0.0 0.0 28 0 ? IW Nov 28 0:00 in.rlogind
matt 17615 0.0 0.2 36 260 p0 S 17:28 0:00 more
root 17613 0.0 0.2 48 272 ? S 17:28 0:00 rpc.rstatd
root 17562 0.0 0.2 316 352 pd S 17:20 0:01 xterm -name LELAND -e rsh elaine-best
james 14107 0.0 0.1 200 92 ? S Nov 28 0:10 xbiff
arnaud 17563 0.0 0.0 48 24 qa S 17:20 0:00 rlogin elaine-best
mihai 13981 0.0 0.0 44 0 co IW 15:04 0:00 /usr/openwin/bin/xinit -- /usr/openwin/bin/xnews :0 -auth /homes/sep/mihai/.xnews.oas:0
bob 14373 0.0 0.0 124 0 q0 IW Nov 29 0:00 -csh (csh)
christin 16295 0.0 0.0 632 0 p9 IW 17:31 0:00 gs -sDEVICE=x11 -dNOPAUSE -dQUIET -dSAFER -
matt 17614 0.0 0.4 356 620 p0 R 17:28 0:00 ps -auxwww
jon 1487 0.0 0.0 236 0 qf IW Nov 29 0:06 bash -i
mihai 14041 0.0 0.0 36 0 co IW 15:05 0:00 dsdm
jun 16722 0.0 0.0 36 0 ? IW Nov 28 0:00 selection_svc
jon 1716 0.0 0.1 200 88 qf S Nov 29 0:04 xbiff
root 1214 0.0 0.0 28 0 ? IW Nov 29 0:00 in.rlogind
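A listing this long is hard to eyeball. Per-user process counts and
resident memory can be totalled from saved "ps -auxwww" output with a
short awk pass; a sketch only, assuming the BSD column order above
(USER is field 1, RSS in KB is field 6):

```shell
#!/bin/sh
# Sketch: summarize a saved "ps -auxwww" listing read on stdin --
# total RSS (KB) and process count per user, biggest memory first.
# Skips the header line; assumes USER = field 1, RSS = field 6.
awk 'NR > 1 { rss[$1] += $6; n[$1]++ }
     END { for (u in rss) print rss[u], n[u], u }' |
sort -rn
```

Fed the listing above, this would show at a glance how much of the
resident memory the xload/xbiff windows and idle shells account for
per user, and whether anyone is near the per-user process limit
(MAXUPRC).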