Kuro5hin.org: technology and culture, from the trenches
Multiple Computer Administration

By ameoba in Technology
Wed Oct 18, 2000 at 09:37:37 AM EST
Tags: Help! (Ask Kuro5hin)

HELP! I've been nominated to design and install a fairly large Linux installation, and don't really know what I'm doing. I've got an outline of what's to be done, but would like your help (yes, you, in the chair, in front of that computer) in working out as many bugs as possible before I actually install the thing.


I'm a CS student at a very small college who has recently been nominated as the school Linux Geek. One of my newly gained privileges is to figure out how to get Linux on all the boxes in the computer lab.

At first, getting Linux onto 25 machines may not seem too daunting, but until about 2 months ago, the most in-depth networking I had ever done was PPP over dialup, and I hadn't run Linux in about a year.

That all changed at the beginning of the term when, on the first day of my Operating Systems class, the prof announced that we'd be doing our labs in Linux. After class, I took him aside and told him that we didn't have any machines running it.

I learned a lot over the next 6 days, getting 5 machines up and dual-booting, sharing user info over NIS and NFS, with basic mail and Samba to access a Windows print server. And after 6 days, it was working.

The head of the CS dept. was quite thrilled to see what I had done. So thrilled that he decided I could get all the machines in the lab dual-booting for next term. The 5 machines I set up had been a rush job, and the system, as it is, is not easily scalable (I, for one, don't want to do 25 installs of Red Hat).

As I said before, I am the most Linux-competent person at school, and while the network admin is competent, his experience is limited to MSFT systems. So I come to you for advice.

While my primary concern is figuring out how to effectively administer this many machines, I'm open to any comments you may have.

The network, as it exists now, consists of 25 mostly homogeneous workstations (P166s and P133s, 64MB RAM, 3-4GB disks, all on the same mobos (with integrated video), with the same net cards) running Win98, and 2 NT4-based servers: one does user authentication and DHCP and stores user files; the other is a backup domain controller and print server (for a lovely PostScript-handling color laser). The Win98 machines are all given a fresh install (using Ghost, a multicast disk-cloning tool) at the beginning of each semester.

At my disposal, I have a stack of 2GB drives, and an unused box (same as the others) to use as a server.

The server will run:
  • NFS for /home, /var/spool/mail and /usr/local (a sample exports file follows this list)
  • NIS for shared accounts
  • lpd pointing at a Samba interfaced printer (this keeps Samba on one machine, allowing the workstations to be as minimal as possible)
  • Samba to interface w/ the NT print server, and possibly share home directories w/ Win98
  • httpd (Apache?) to allow students to do web development
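
For the NFS piece, a minimal sketch of what the server's /etc/exports might look like (the 10.0.0.0/24 lab network is an assumption; adjust to whatever the NT box's DHCP scope actually hands out):

    # /etc/exports on the server -- a sketch, assuming the lab sits on 10.0.0.0/24
    # rw for user data; root_squash so root on a client isn't root on the server
    /home            10.0.0.0/255.255.255.0(rw,root_squash)
    /var/spool/mail  10.0.0.0/255.255.255.0(rw,root_squash)
    # read-only is enough for shared binaries
    /usr/local       10.0.0.0/255.255.255.0(ro)

Run exportfs -a (or restart the nfs init script) after editing it.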

The workstations are where my present problem lies. I plan on placing Linux on the boot drive and having LILO boot the Win98 partition on the 2nd drive by default. This should allow me to still use Ghost for cloning the 98 side of things (since it only works with FAT/NTFS partitions). The Linux systems will probably be cloned with CluClo or something similar, after I get one 'template' machine tweaked to perfection.
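
A minimal /etc/lilo.conf along those lines might look like the sketch below. The device names are assumptions (Linux on /dev/hda, Win98 on the first partition of /dev/hdb); the map-drive lines make Win98 believe it lives on the first BIOS disk, which it usually insists on:

    boot=/dev/hda            # install LILO in the MBR of the first disk
    default=win98            # boot Windows unless the user picks otherwise
    prompt
    timeout=50               # tenths of a second, so 5s

    image=/boot/vmlinuz
        label=linux
        root=/dev/hda2       # wherever the Linux root partition ends up
        read-only

    other=/dev/hdb1
        label=win98
        table=/dev/hdb
        map-drive=0x80
           to=0x81
        map-drive=0x81
           to=0x80

Remember to rerun /sbin/lilo after every change, and on every freshly cloned machine.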

The workstations will be using all the goodies that I'll be putting on the server, and getting their IPs over DHCP from the NT domain controller. Everything is going to be running Red Hat 6.2, partially because it's a name the faculty/administration may recognize, and (primarily) because I have a CD.

As I said earlier, my main question is how to do maintenance/administration on 25 machines that very well may not even boot Linux for several days at a time. Presently, my best idea is to have them check an NFS-shared directory for modification (shell) scripts at boot time, and make a mark in some log directory/file to keep track of what's been done already.

Again, any comments about how things could be done better, potential pitfalls, and whatnot will be gladly accepted and considered.

Security Note: Security will not be a major concern here, as the network is not, nor will it ever be, connected to any external networks, and most of the users are middle-aged men taking night classes who just want to get their work done and get home to the kids...

One final question: How much should I try to get paid for this?

Poll
Favorite Vowel?
o A 7%
o E 14%
o I 4%
o O 5%
o U 6%
o Y 31%
o K5 29%

Votes: 89

Related Links
o CluClo


Multiple Computer Administration | 39 comments (38 topical, 1 editorial, 0 hidden)
How much should I try to get paid for this? (3.18 / 11) (#1)
by codemonkey_uk on Wed Oct 18, 2000 at 07:05:48 AM EST

In my opinion? Nothing.

You're doing it to learn, you've admitted that you don't really know what you're doing, and I hope you've told them the same.

Do you want to be liable if something goes wrong? If you accept money then you will be.

Do it to learn, or not at all.

Thad
---
Thad
"The most savage controversies are those about matters as to which there is no good evidence either way." - Bertrand Russell
Time... (4.14 / 7) (#2)
by Luke Scharf on Wed Oct 18, 2000 at 07:25:26 AM EST

"In my opinion? Nothing."

As this thing grows, it will take more and more of his time until he has none left.

Even though Linux machines don't "just get changed", the maintenance after the initial setup always takes time. There's no such thing as an install-and-forget situation, especially if you spend a lot of time in the lab and get a reputation for being knowledgeable.

Of course he should get paid -- he's giving up the opportunity to have a life in order to support the other students and the faculty. As for how much, that's a question of how much he values his time and how much the school values his service. I started at $7.50/hr as a student sysadmin which, all things considered, was a very fair wage. My bosses seem to like my work, so I'm making a lot more now. :-)



[ Parent ]
re: Time... (4.50 / 6) (#3)
by codemonkey_uk on Wed Oct 18, 2000 at 08:04:04 AM EST

Some good points.

I'm not sure if you're talking as someone who has been a student sysadmin at a school, college, or university, but I get the impression that the author of the article is at school.

As such, I would strongly advise him/her to manage their time very carefully. Your school years are vital to your social development, and I know it's a cliché, and it doesn't feel like it at the time, but your youth is the best time of your life. Don't waste it in a lab working for pennies. There's plenty of time for that later.

For student sysadmins at college or uni there is the possibility that it'll lead directly into full-time work when you finish, but, and I shudder at the thought, who would skip college/uni to spend the rest of their life as an admin at a school? Schools don't have the budgets to pay a pro; don't become a slave.

If you want to set this system up, you're going to have to learn to say no, and keep it in school time and on your own terms.

Nobody likes a workaholic. This will sound childish or patronising, but remember to play with your friends. Enjoy life.

Thad
---
Thad
"The most savage controversies are those about matters as to which there is no good evidence either way." - Bertrand Russell
[ Parent ]
Get real (4.00 / 1) (#30)
by kmself on Fri Oct 20, 2000 at 04:09:00 PM EST

The guy is going to be doing some serious system configuration on a student budget. He's posted his hourly rate now -- $7.50. The equivalent commercial consulting rate would be about twenty times that.

He's being paid crusts-and-water money to learn a whole lot while trying to provide a useful service. Both sides should be getting something out of this; his end is rather more the experience than the money.

--
Karsten M. Self
SCO -- backgrounder on Caldera/SCO vs IBM
Support the EFF!!
There is no K5 cabal.
[ Parent ]

A few comments... (2.66 / 9) (#4)
by chaotic42 on Wed Oct 18, 2000 at 08:36:25 AM EST

The head of the CS dept. was quite thrilled to see what I had done. So thrilled that he decided I could get all the machines in the lab dual-booting for next term. The 5 machines I set up had been a rush job, and the system, as it is, is not easily scalable (I, for one, don't want to do 25 installs of Red Hat).

Don't rush something like that. You'll mess something up, and spend all kinds of time searching for it.

I wouldn't want to do 1 install of RedHat :)

The workstations are where my present problem lies. I plan on placing Linux on the boot drive and having LILO boot the Win98 partition on the 2nd drive by default.

Use grub. It's easy, it's pretty, and you don't have to do much mucking around.

One final question: How much should I try to get paid for this?

Resign from the position and let someone else do it. I'd _pay_ to have the chance to do something like this. Let someone who wants to learn, not just make a quick buck, do it.

.. (3.00 / 2) (#8)
by ameoba on Wed Oct 18, 2000 at 09:26:02 AM EST

I'd do this even if I wasn't going to get paid for it, but since I'm paying just under $20k/yr in tuition and having trouble scraping up the money for a pack of smokes (and no, I'm not going to quit in the near future), I think I deserve something. Not to mention the inevitable amount of user support I'll be doing (carrying 19 credits AND teaching everyone how to use vi would leave me spread pretty thin...).

I guess, if I'm going to be doing that much tech-support kind of work, maybe a weekend "intro to Linux" class would be a good idea... spend 2-3 hours getting some basic stuff taken care of...

thx. for the idea. =)

[ Parent ]
Is 'Y' a vowel? (1.58 / 12) (#5)
by karjala on Wed Oct 18, 2000 at 08:39:47 AM EST

Are you sure 'Y' is a vowel?

'Y' is sometimes a vowel (1.50 / 6) (#6)
by end0parasite on Wed Oct 18, 2000 at 08:55:41 AM EST

Take, for example, the word "my".

[ Parent ]
And W (1.00 / 3) (#22)
by joeyo on Wed Oct 18, 2000 at 10:38:08 PM EST

Take, for example, the word "word".

No, that's not right....

Try the word "welcome".

Hmm, still not a vowel....

"A, E, I, O, U and sometimes Y and W"


--
"Give me enough variables to work with, and I can probably do away with the notion of human free will." -- demi
[ Parent ]

How about... (1.00 / 1) (#26)
by pb on Fri Oct 20, 2000 at 04:44:18 AM EST

The word "why"?

I'd argue that in this case, the vowel is "Y and W", so "sometimes Y and W" is absolutely correct in this case. :)
---
"See what the drooling, ravening, flesh-eating hordes^W^W^W^WKuro5hin.org readers have to say."
-- pwhysall
[ Parent ]
It's a semivowel (1.00 / 2) (#23)
by SIGFPE on Thu Oct 19, 2000 at 06:15:06 PM EST


SIGFPE
[ Parent ]
Secure Those Boxen! (3.88 / 9) (#7)
by slambo on Wed Oct 18, 2000 at 09:03:39 AM EST

Quote...
Security Note: Security will not be a major concern here, as the network is not, nor will it ever be, connected to any external networks, and most of the users are middle-aged men taking night classes who just want to get their work done and get home to the kids...
IMHO, this is not a very safe assumption, nor is it a very secure policy. Security should always be a concern, even if the computer is short a network card or two. When you network two computers, as soon as one of them gets onto a bigger network, the other is there too.

Since you're new at it, do some reading; there is an enormous amount of information on securing Linux, and good articles fill the first page of results on Google for "securing Linux". It's not that hard, and the extra steps will be worthwhile as soon as the first script kiddie hacks root on one of the boxen (it doesn't always come from external sources). Besides, if one of the boxes is hacked, whose job will it be to restore the original configuration on it?

It takes quite a bit more time and hassle to install/configure/reformat/reinstall/reconfigure/ad infinitum than to install/configure/secure/maintain once.
--
Sean Lamb
"A day without laughter is a day wasted." -- Groucho Marx

Don't forget the local content (4.50 / 4) (#11)
by Inoshiro on Wed Oct 18, 2000 at 10:23:22 AM EST

Look at the stories I've written. All the ones in the security category should help you get a firm understanding of security, and why it matters. They go from a very basic introduction, through securing your systems from remote attack, discussion of services, etc. I've yet to add securing local systems for shell use (oh well), but the short answer is OpenBSD ;-) Chances are you can get away with a minimal Slackware setup on each machine with the D series (development stuff like pmake, gmake, egcs, perl, python, glibc2, etc!) as that's what you're targeting. Red Hat means you have to babysit the machines for years as new holes crop up -- which is much less likely with Slackware.



--
[ イノシロ ]
[ Parent ]
Eh? (1.33 / 3) (#12)
by Nickus on Wed Oct 18, 2000 at 12:56:43 PM EST

Why would Slackware have fewer security holes than Red Hat? They run basically the same software, so they should have the same security holes. Just because we don't see any errata from Slackware doesn't mean there aren't security holes.

Due to budget cuts, light at end of tunnel will be out. --Unknown
[ Parent ]
You're wrong. (3.25 / 4) (#15)
by Inoshiro on Wed Oct 18, 2000 at 03:01:23 PM EST

Red Hat uses software that is different from Slackware. Red Hat uses Vixie's crond, Slackware uses Dillon's crond. Red Hat uses GCC from CVS, Slackware uses egcs 1.1.2 (with gcc 2.95.2 in contrib). Red Hat uses XFree86 4, Slackware uses XFree86 3.3.6 (4 is in contrib, but won't be added until it's more stable). For a long time, Slackware used libc5 until glibc2 became stable (even though Red Hat used betas).

It's about care and attention to detail. Slackware uses tested, reliable packages. Red Hat uses whatever they feel like using that gets the most features to their users. People who use Slackware tend to stay with Slackware. Red Hat users tend to learn of Debian or Slackware, and move on to them. It's as simple as that.



--
[ イノシロ ]
[ Parent ]
Partly right (2.66 / 3) (#16)
by Nickus on Wed Oct 18, 2000 at 04:01:00 PM EST

OK, RH7 has some issues, but RH6.2 is one of the most stable distributions. And I said most software; of course there will be differences.
One feature that Red Hat has, and no other distribution has that I know of, is kickstart. That makes installing multiple computers extremely easy. No disk cloning needed: just insert a disc and boot the computer, and it installs and configures itself.
Another good thing about RH is that package installation is not interactive; that is, rpm doesn't ask questions. I know Debian has started to use debconf, but if I have understood it right, not all packages are using that feature yet. If you want to manage multiple computers you can't allow that. Everything should be completely automatic, and I think Red Hat is the best distribution for that.


Due to budget cuts, light at end of tunnel will be out. --Unknown
[ Parent ]
Again the uninformed speak out. (4.00 / 1) (#28)
by Inoshiro on Fri Oct 20, 2000 at 01:06:31 PM EST

You really do seem to genuinely care about this, but you don't have the background information. Cloning systems is as simple as dd if= .. :-) If you want to take it to a higher level, try CluClo.

"Another good thing with RH is that package installation is not interactive" -- besides this being a Straw Man (since you need to setup a system once, clone that, and then have DHCP do the rest...), it's also untrue. Slackware's package system on install asks you if you want to use pre-prepared responce files. This give the ncurses based install (good if you don't feel like making X work on your machines ..) a script to follow. At that point the speed is limited by the disk I/O subsystem, since the tarballs uncompress nearly instantly on even non-powerful systsems.

" If you want to manage multiple computers you can't allow that. Everything should be completly automatically and I think RedHat is the best distributions for that." If you want to setup multiple computers, use a responce file. If they are really lots of computers, use disk cloning. ANY OS can do it. If you want the best tool for managing multiple computers in the wild, go get OpenSSH and speak no more :-p



--
[ イノシロ ]
[ Parent ]
Mr.Zip and Mr.Batch are good ;) (3.00 / 4) (#10)
by mikenet on Wed Oct 18, 2000 at 10:06:33 AM EST

There are two free programs called Mr.Batch and Mr.Zip which are used for netbooting. They work with both Linux and Windows, and will send a compressed image of the OS. The first time the machine boots (you can make a nice menu to let the user decide which OS to run), it gets sent the image, which is written to a cache partition. The next time it boots, the image is read from the cache, so it doesn't have to wait for the download. If any changes are made to the image, they will be found before the cached image is booted, and the changes (not the whole thing, unless it is really small) will be sent to the client machine and saved in the cache.

This way, each time a machine boots, the HD is always clean, and users can do whatever they want (even log in as root and screw the whole thing up), but it will be clean again as soon as they figure it out and whack the reset button.


BTW, don't use NFS!!! There are many more secure alternatives available as patches to the kernel. (Freshmeat is your friend if you need to find some.)

Where can we find these placebos? (1.33 / 3) (#13)
by mooshu on Wed Oct 18, 2000 at 01:13:37 PM EST

I want to play around with Mr Batch/Mr Zip, but I was unable to find them.
Where are they (or similar programs)?

Thanks
Moosh

[ Parent ]
And you find these fine tools WHERE? (3.50 / 2) (#18)
by gerblazi on Wed Oct 18, 2000 at 04:28:53 PM EST

Mr.Batch and Mr.Zip are not clearly identified at Freshmeat. Does anyone have a link to where I could find a netboot program called "Mr. Batch"?

[ Parent ]
Sorry, I forgot to post the link (4.00 / 2) (#19)
by mikenet on Wed Oct 18, 2000 at 07:13:58 PM EST

Sorry, I posted while I was hardly awake and had to get ready for school. It has been a while since I played with it, and the main distro is now called BpBatch.

[ Parent ]
What are they using now? (none / 0) (#34)
by MikeApp on Sun Oct 22, 2000 at 12:33:00 AM EST

What are they using to administer their Windows (NT?) machines at the moment? I only ask b/c I have been evaluating Symantec's _Ghost_ for use in preparing Windows PCs that we give to our university faculty, and I noticed Symantec claims to support ext2 as well as FAT & NTFS. There are only two reasons that I can see for using this commercial product:
[1] CluClo doesn't use BOOTP on startup, but chooses "a random IP" (possibly colliding with another machine in the process - not good).
[2] Ghost can, I believe, update the network settings on a Windows machine during the imaging process, so you can have one image but define each machine's network name individually. I've only had a few hours to work with this - you should download the trial version if it sounds useful.
It looks like the license costs will be US$10-20 per workstation.
-Mike

[ Parent ]
I see some BIG problems (3.33 / 3) (#14)
by GrEp on Wed Oct 18, 2000 at 02:49:15 PM EST

As one of the administrators for Drake University's Sheppard Cluster, I have a couple of things we learned the hard way.

1. Distributed services: If you have more than a couple of boxen, use them to serve up different stuff. We have our main server with a /home directory where all users store their files. They are all in one place, so we can back them up to tape three times a week. PERFORM FREQUENT BACKUPS OF USER FILES!!!! Drives WILL fail. We also have two lesser boxen serving as print server/time server and telnet servers, respectively. You would probably farm your Apache out to one of these boxen too.

2. Don't run a mail server!! Big headache, and big security problems.

3. Have separate root passwords for admin and client boxen.

4. Why dual boot? 90% of the work I do as a CS student is in Linux. I would suggest not making all of your boxen dual-boot; just put Linux on them. You will save thousands of dollars in software costs, and loads of admin headaches.

As far as pay, we get about $50 apiece per week for the three of us. Also, you can get a load of independent-study credits for playing around ;) I hope that makes your life a little easier.

-Brew

Backups (4.00 / 2) (#17)
by Nickus on Wed Oct 18, 2000 at 04:04:31 PM EST

You shouldn't just perform frequent backups, you should do daily backups: every night, so people will never lose more than one day of work. And test your tapes regularly, so that you really can restore in case of a failure; people tend to forget that part.
If you want to run a mail server, then choose qmail. It is a lot easier to administer than sendmail.


Due to budget cuts, light at end of tunnel will be out. --Unknown
[ Parent ]
Simple, Awesome Solution (3.00 / 4) (#20)
by end0parasite on Wed Oct 18, 2000 at 09:49:17 PM EST

Walk the students through installing it themselves! Then they'd really be learning Linux.

ks or systemimager (2.66 / 3) (#21)
by rob latham on Wed Oct 18, 2000 at 10:13:29 PM EST

Red Hat has kickstart, and if you can get past the incorrect documentation and syntax problems (it chokes on the slightest mistake) you can bang out 25 floppies and have RHat on 'em all in no time.
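
A bare-bones ks.cfg, to give the flavor (every value here is a placeholder, and this is from memory; the real directive list is in the Red Hat docs, errors and all):

    # ks.cfg sketch -- hypothetical NFS server, partition sizes, and password
    lang en_US
    network --bootproto dhcp
    nfs --server 10.0.0.1 --dir /exports/rh62
    keyboard us
    zerombr yes
    clearpart --linux
    part / --size 1700 --grow
    part swap --size 128
    install
    mouse ps/2
    timezone US/Pacific
    rootpw changeme
    lilo --location mbr
    %packages
    @ Base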

systemimager (http://www.systemimager.org) might be a good choice, especially since you aren't installing on laptops (I've been fighting with PCMCIA and SystemImager the last two days). Get one install the way you like it, rsync it to an image server, make the autoinstall disks, and all the computers will fdisk and do an rsync pull from the image server.

==rob

The Graduate: "I have one word for you" (4.00 / 1) (#24)
by kmself on Fri Oct 20, 2000 at 12:18:27 AM EST

Diskless [1].

If you're going to be setting up (and maintaining) 25 systems, possibly growing in number, diskless is a way to minimize the associated administrative and security headaches. Each of your client systems becomes an identical clone. You reduce the need to protect state at the client box -- tape and power-supply backups are centralized. Updating the system becomes a centralized operation. You can readily provide increasing amounts of redundancy by growing a low count of central servers rather than a high count of distributed clients. The solution scales readily, and to high values.

There are several different modes of "diskless" operation, from true diskless NCs to X terminal servers to centrally "pushed" clients which do maintain significant local state.

The HOWTO (linked above) describes a Red Hat setup; there was a recent FreeOS article describing setup of a Debian-based diskless system.
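
For the NFS-root flavor, the client kernel is built with NFS-root support and pointed at its server with boot arguments along these lines (server address and path are made up; see the kernel's nfsroot documentation):

    # added to the client kernel's boot arguments (e.g. a LILO append= line);
    # the "%s" is replaced by the client's IP, so each box gets its own root tree
    root=/dev/nfs nfsroot=10.0.0.1:/tftpboot/clients/%s ip=bootp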

You might also want to look at LinuxToday search results.

[1] And for all you young whippersnappers out there, The Graduate was a 1967 movie starring Dustin Hoffman as a recently graduated college student making some interesting life choices, starting with some infamous advice.

--
Karsten M. Self
SCO -- backgrounder on Caldera/SCO vs IBM
Support the EFF!!
There is no K5 cabal.

.. (4.00 / 1) (#25)
by ameoba on Fri Oct 20, 2000 at 04:33:40 AM EST

How much of a performance hit would there be in running the OS off the network? At least in part, I'm involved with this project to help expose ppl to Linux and have them see it in a good light, and I think that '98 running locally would have a definite performance advantage over a remote Linux install once you had more than a handful of ppl on the network.

If the machines go diskless, it'll be all the way. Having to partition the drive for swap is virtually the same amount of effort as placing a whole system on it. As for keeping any significant local info, it's the basic local config info that's going to be the tricky part to keep up to date.

The thought of X terminals crossed my mind, but the project has virtually no budget, forcing me to work with available hardware, and the fastest thing I have available is the P200 w/ 64MB. How many clients would I actually be able to run efficiently off this thing (keeping in mind that ppl WILL be doing Java work, which often includes a browser looking at the API docs...)?



[ Parent ]
Performance (4.00 / 1) (#31)
by kmself on Fri Oct 20, 2000 at 04:12:37 PM EST

Not quite sure. While I've wanted to set up diskless servers, I've never actually done it <g>.

The answer probably depends on how things are configured. What I would really like to see in a diskless configuration is something which manages processing locally, receives (and stores) state over the network, but does significant caching of content on local disk. The hazard otherwise with network-based systems is a significant startup latency -- both for the system and applications -- exacerbated by other network activity. The worst thing to do is to have a bunch of people in a lab firing up their systems at the same time, all launching the same (large) applications simultaneously. Bandwidth is limited.

What you're aiming for is to keep both security and integrity issues to a minimum. With the full-workstation mode of installation, you risk people walking in with their own boot media and messing with the system. While it's possible to do this on a diskless system, your recovery mode is far, far easier -- a floppy disk and a couple of minutes, not a set of installation CDs and an hour.

I'll disagree strongly that formatting and configuring swap space is equivalent to a full installation. First off, that's grossly inaccurate. Secondly, it completely ignores ongoing maintenance issues. A default disk configuration could probably be scripted (though I don't know offhand of tools that allow this -- anyone got a pointer?). Even a kickstart installation is going to take significantly more time.

As I said, there is a range of possible configurations. I've talked to friends at VA Linux; what they do is use a "push" mechanism for updating systems -- essentially you've got fat workstations, but the administration of them is centralized. This balances some of the issues between performance and ease of administration for fat and thin clients. I would suggest you study the area and look for pointers, and possibly some assistance, in setting up what works best for you.

WRT X terminals -- this isn't a case of buying specific hardware, it's a matter of configuring computers to act as X terminals. Your clients would be more than sufficient, so it should fit into your budget requirements. It's the server I'm concerned about: the downside is that all processing, other than the X display itself, resides on the host system. This would probably be too much of a hit for the server hardware you're mentioning.

--
Karsten M. Self
SCO -- backgrounder on Caldera/SCO vs IBM
Support the EFF!!
There is no K5 cabal.
[ Parent ]

A Little Variation (4.00 / 1) (#32)
by scheme on Sat Oct 21, 2000 at 01:16:45 PM EST

Instead of going fully diskless, I think ameoba could use NFS to share /etc, to make sure the configurations are the same across all the machines, and keep the system and libraries on the local disk. This will probably give good performance while making maintenance easier. He could also probably use rdist [1] to update the local files every night or so.

If you're reading this ameoba, I think you should probably ask for 7-12 an hour.

[1] Caveat: I haven't used rdist or really played around with it, so I'm not sure if it'll do what you need. Also, there may be problems when updating system libraries: you may need to reinstall a machine if it crashes while rdist is mid-update.
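
For what it's worth, rdist is driven by a Distfile; a minimal one looks roughly like this (host names and file list are made up):

    # Distfile sketch -- push a few config files to every lab box nightly
    HOSTS = ( lab1 lab2 lab3 )
    FILES = ( /etc/passwd /etc/group /etc/hosts )

    ${FILES} -> ${HOSTS}
            install ;
            notify root ;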


"Put your hand on a hot stove for a minute, and it seems like an hour. Sit with a pretty girl for an hour, and it seems like a minute. THAT'S relativity." --Albert Einstein


[ Parent ]
Updates/Clarifications (5.00 / 1) (#27)
by ameoba on Fri Oct 20, 2000 at 05:27:56 AM EST

First off, I'd like to thank all of you for your time and ideas. There have been some interesting ideas brought up that I definitely plan on working into my setup.

Secondly, I'll clear up some things:
  • Hardware availability: There are only 3 labs on campus: one for computer graphics, one for engineering, and one general-purpose/CS lab (plus a few machines in the library). Even with just these 3 labs, the student/computer ratio is better than 5:1. As such, I can't justify the space/expense of another lab just for Linux, so I am forced to dual-boot the machines if this is to be done at all. On the same note, anything that requires more servers is right out. (The one I have is surplus from the library, since the librarian decided she needed a P3 to handle the card-catalog DB.)
  • Money: It looks like I'll be getting a "$7.50/hr student sysadmin" position under work-study. Probably 5 to 10 hours per week, unless the current network guy convinces them that they should give me more hours so I can help him w/ the rest of the computers on campus.
  • Network security: You guys have brought up some interesting points, and I'll be sure to clean things up a bit. Still, on the whole, the primary purpose of security here will be to keep users' files separate. The lab is not, nor will it ever be, connected to an external network; hell, it's not connected to anything outside the room. So, what can I reasonably do to stop ppl from running packet sniffers and/or spoofing NFS? (Or am I seriously missing something that could go on here?)

    This is the kind of site where you can safely have a guest account, and not use a password. Really.


Additional musings re: security (5.00 / 2) (#29)
by slambo on Fri Oct 20, 2000 at 03:22:16 PM EST

Generally, there are three types of policies when it comes to security:
  • Deny everything that isn't specifically granted.
  • Grant everything that isn't specifically denied.
  • Grant everything.
I tend to fall into the first of these three categories as a sysadmin myself. There are those who would call this type of policy one for control freaks, and I would partially agree. However, with a "deny unless explicitly granted" policy in place, you know that your users won't be able to do something stupid like rm -rf * from your / partition (which is even more fun if you've got partitions from other machines on the network mounted locally with write access). With this policy toward security, you can be a little more certain that users aren't getting to anything they shouldn't have access to.

The second category assumes that the user can be relied on to "play nice" on the system. Most users can be trusted this way. Indeed, many of the people logging in wouldn't know the difference between /proc and /etc, and they don't have to, because it's not their job to know (it's your job to keep them straight). However, ignorance is by no means bliss for a sysadmin; I've taken enough tech-support calls helping a user rebuild a system after he/she played the Delete Files game to know this. With the second of the three policies listed above, you can explicitly deny access to certain parts of the filesystem, like /etc, /dev and /proc, but you still can't be entirely certain that your users will behave properly in the rest of the filesystem, not to mention all the programs and services that run setuid root.

Finally, the third policy category, while the most permissive, can easily lead to the highest number of reformat/reinstall cycles. Yeah, the default install on any *NIX system will lock out some things that the user shouldn't be able to access, but I consider this policy equal to giving everyone root access. If everyone is supposed to have root access in the first place, then it's not as much of a problem, because you're basically planning for the reinstall from the outset.

The big question is "what is the appropriate risk level?" For this project it seems that the default install plus a few safeguards would be enough for now. You could start with a "grant all that isn't explicitly denied" policy, adding additional measures as you learn and need them.

I'm tempted to say that you will likely lean more toward the first option before the school year is out, but not having met you or your users, I can't say that for certain. What I can suggest is that you make a concerted effort to learn everything you can in the time you have available. Integrate what you learn into your policies as you learn it. Each bit of knowledge that you collect becomes more valuable over time, just like investing for retirement.
--
Sean Lamb
"A day without laughter is a day wasted." -- Groucho Marx
[ Parent ]

A similar question (none / 0) (#33)
by klash on Sat Oct 21, 2000 at 07:13:39 PM EST

I'm in a similar position to the person who posted the article, but my question is a bit different:

The machines dual boot NT and Debian. Since we have identical hardware configurations on all the machines, we simply do a bit-for-bit clone off a master machine's hard drive.

Is there any easy way to admin all of the boxes simultaneously once they're up? Some sort of daemon you can run on each machine, where you can connect and issue shell commands on all the machines simultaneously?



.. (none / 0) (#35)
by ameoba on Sun Oct 22, 2000 at 08:15:11 AM EST

Ah... so you're in the same boat I'm in, WRT setting up dual-boot machines.

I've given some thought to how to do remote administration of machines that can't be expected to be on, and the best thing I've come up with involves shell scripts on a shared remote drive.

It starts with this bit of shell script (like you're likely to find in /etc/profile), placed at the end of the init process (i.e. rc.local):

for file in /usr/local/admin/*.sh ; do
    if [ -x "$file" ]; then
        . "$file"
    fi
done

where /usr/local is shared over NFS (granted, it DOES seem kinda silly for /usr/local to be remote, but...). In this directory will be shell scripts to perform whatever administrative modifications need to be done (copying files onto the local filesystem, overwriting config stuff, etc.).

The scripts should have some naming convention that ensures that, for a machine that hasn't booted to Linux for a while, they'll be executed in order (00001.sh, 00002.sh, 00003.sh, ... seems like a good system). The scripts will go approximately like:

1) Check for the existence of /var/log/blah/00001.done; if it exists, exit the script.

2) If the file doesn't exist, do whatever it is you need to do.

3) Create the file in /var/log/blah.

If you want, you could make directories in /var/log/blah to keep backup data, or just the output from the commands executed in the script. If the machines will have a meaningful local name (or if you feel like keeping a list of Ethernet MACs), mailing the admin is also an option.
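
Putting steps 1-3 together, one of the numbered scripts might look like this (the copied file is just a made-up example). One subtlety: since the boot loop sources these scripts with '.', an 'exit' would abort the whole loop, so guard with an if instead:

    # 00001.sh -- sourced at boot by the loop above, so no "exit" here
    STAMP=/var/log/blah/00001.done
    if [ ! -f "$STAMP" ]; then                        # step 1: skip if already done
        cp /usr/local/admin/files/hosts /etc/hosts    # step 2: the actual change
        touch "$STAMP"                                # step 3: mark it done
    fi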

[ Parent ]
pushing commands (none / 0) (#38)
by bloy on Tue Oct 24, 2000 at 11:46:00 AM EST

At my last employer, we did something like this. As a machine was coming up, and once a night via cron, the machine would run all the scripts in /usr/local/daily.

Also, the machine would look in /usr/local/once for files that had mod times more recent than the last time the daily script was run, and run those.

This allowed us to have "daily reset the machine" types of commands, for things like creating mail aliases and keeping the ssh host key file up to date. It also allowed us to do things on a once-only basis, like getting a report on filesystem space to be munged into a graph later, or going through and repairing a broken file.
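
That scheme fits in a few lines of shell (a sketch; the stamp-file name is made up):

    #!/bin/sh
    # nightly.sh -- run at boot and from cron
    STAMP=/var/log/last-daily-run
    for s in /usr/local/daily/*.sh; do sh "$s"; done
    # run any "once" scripts touched since the last sweep, then move the stamp
    [ -f "$STAMP" ] && find /usr/local/once -type f -newer "$STAMP" -exec sh {} \;
    touch "$STAMP"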

[ Parent ]

Half-way help... (none / 0) (#37)
by Miniluv on Mon Oct 23, 2000 at 03:00:28 AM EST

We JUST implemented something here at work that might work under Linux, I dunno. It works on our SCO and Solaris boxes. I myself have not yet touched it, nor managed to corner the admin into showing me. It's called NSH, and as it was briefly explained to me, it allows you to push commands to all the client computers running the agent version of the software. Mind you, this was explained to me at the tail end of a 14-hour overnight shift in which I wrote a mind-numbing amount of PHP, so I may not have the details right, though I do know it's called NSH. I tried looking at gnu.org and Freshmeat and couldn't find it. If I find out where it's at, I'll post it.
"Its like someone opened my mouth and stuck a fistful of herbs in it." - Tamio Kageyama, Iron Chef 'Battle Eggplant'
[ Parent ]
Ideas (none / 0) (#36)
by jodys on Sun Oct 22, 2000 at 12:31:01 PM EST

Ditch the standard lp daemons; use LPRng. I have had much better luck with LPRng/Samba than with lp/Samba.

Take a look at LTSP, which is a rather nice tool for setting up diskless machines, if you think you can get the performance out of your network to deal with having the whole system NFS-mounted. The new version 2.0 allows you to get the system to boot without a drive but run all the apps locally (as opposed to remotely, which is the way I use it).

As for your pay, ask for the sky, then work your way down. Technically, what you're doing is pretty advanced, and you should be paid well. But that's probably not going to happen.

You just described my job! (none / 0) (#39)
by Jason H. Smith on Mon Oct 30, 2000 at 11:53:06 AM EST

I am the Linux system administrator in the electrical engineering lab at a large university. We use Solaris servers for lots of cycles, as well as 29 Linux workstations in the lab, which I administer. We use NIS and NFS to keep things as sane as possible. I started this position just over a month ago, but I have already learned a lot of what you are fixing to learn. Fortunately, I inherited the shop from competent administrators who designed the system we use.

First things first: Everything must be exactly identical! My biggest day-to-day headache is keeping everything exactly the same. It is a bit of a headache, but much better than trying to deal with the entropy of dissimilar systems. Definitely use a distribution which supports packages that are easy to update. I am most familiar with Debian, and that's what they already use at the lab. Debian's apt tool is very nice for staying updated (security is a concern here). I am not as familiar with the other distributions, but I'm sure Red Hat, SuSE, etc. have similar systems. This school uses a lot of Red Hat in general. Packages save you a lot of headache!

When I want to install a fresh OS on all the lab machines (my most recent project), I take one machine, make it look exactly like I want it, and then basically tar the whole bastard onto a CD, which we then dump onto each machine. I suggest something like this. It's quicker to make a master and copy it, and you are more likely to end up with identical systems. YMMV.

Set up ssh (or rsh, if you know that you don't need any security; ssh is just as easy, however) and configure your identity and authorized_keys files so that you can log in to each machine easily. This makes it easy to use shell scripts which automate the same task on each machine and report back. I find myself frequently sweeping across the Linux network running 'who', and this would be difficult if I had to type 29 passwords in a row.
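
A sweep like that is just a loop once the keys are in place (a sketch; the lab1..lab29 naming is made up). Run it as, say, ./sweep.sh who:

    #!/bin/sh
    # sweep.sh -- run one command on every lab workstation, labeling the output
    cmd="$*"
    for n in `seq 1 29`; do
        echo "== lab$n =="
        ssh "lab$n" "$cmd" 2>&1
    done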

AFAIK, there is currently no tool which makes concurrent administration easier or more automated. (Somebody please prove me wrong, with a hyperlink.) I am seriously considering writing a tool which handles this. I want to type a command and have a program run it on each machine and tell me what they all said. I'd also like a capability that can tell me which machines did not return the status quo, so that I could easily connect to them and fix errors. Sort of like expect, except with intuitive knowledge that it's talking to 30 machines at once, handling error conditions accordingly. Does anybody else need this tool? Care to help?

Oh, and currently I work 28 hours a week (the max allowed, but I probably do more like the low 30s) and find myself pretty busy. But I also help admin the Solaris stuff, and, like I said, it is a busy shop. Still, you might want a few more hours per week.

Oh yeah. Don't rush into implementing anything! Read documentation, read HOWTOS, RTFM. Research everything. Nothing sucks more than a hastily-designed system that you have to deal with. I turn in a weekly report, which is very nice because I can explain all the research and design considerations I've made all week, even though I haven't produced anything material.

Anyway, if you get in touch with me (rot13 my email), I can send you stuff that I use here, like documentation on what we do, ISOs of the install CD we use, in-house tools we've made to generate the CD image, etc. It's a pretty busy lab here, so some of it might be overkill for your tasks, but it couldn't hurt.

Good luck!
Ants. (two by two)

