Second HEPiX Meeting, CERN, Sept 28/29 1992
Minutes from the HEPix meeting held at CERN, September 28th & 29th 1992
========================================================================
These minutes are a collective effort! Many thanks to the people listed
below who volunteered to take notes for a specific session.
Paul Lebrun, Fermilab
Jan Hendrik Peters, DESY
Holger Witsch, ESRF
Maria Dimou-Zacharova, CERN
Lisa Amedeo, Fermilab
Alan Silverman, CERN
Juan Manuel Guijarro, CERN
Rainer Geiges, Inst. f. Nucl. Phys. Univ. Mainz Germany
Paul Cahill, Rutherford
========================================================================
From: Judy Richards, CERN
Subject: Minutes of the Site Reports Session, Part I.
TRIUMF, Corrie Kost
Manpower:
Computing services 4 full-time; DAC 2.5 full time
Hardware:
DEC shop: VMS (about 110 VUPS) and Ultrix (about 426 VUPS). Unix machines
include 18 DEC 3100s, 8 DEC 5000s, 1 SGI Iris (4 CPUs), 1 HP 720, 4 Suns. Of
the DECstations, one is a fileserver (3Gb); the rest are standalone with
typically 1Gb of disk and an Exabyte.
Software:
DEC's Campus-wide Software Licensing agreement.
Libraries - on Unix and VMS - CERN, CMLIB, SLATEC, TRIUMF. IMSL and NAG
on VMS only - expensive.
Use EXPAND from Fermi to manage FORTRAN sources.
Use emacs. Have a few licenses for Mathematica and Framemaker.
Otherwise public domain for Tex, screen dumping,...
Popular 'home grown' plotting package PLOTDATA/EDGR.
Have defined some 'standards' -
8mm tapes, postscript printers (HPlaserjets), NCD X-terminals
Tex, Latex, dvips, xdvi, gs, ghostview
xterm terminal window, tcsh shell, emacs
DEC Fortran on DECstations, GNU C
elm, ispell, mxrn, archie, www
Communications:
Thin and thick Ethernet, 5 km optic fibre; lines to Canada (112 Kb) and the USA
(256 Kb).
Explosion of network tools - explosion of software on the network -
end-user is swamped! Need to filter and propose subset to end user.
Good function for HEPix??
Key Problems:
System management. Initial set-up 2 days (use RIS); upgrade 2-3 days.
No time to tune.
Poor documentation - incomplete, erroneous, scattered.
X interface development - painful even with GUI builders (X-Designer).
Need user friendly online Help, workable local news service.
Need better/easier network management tools.
Current Trends:
More centralized compute serving, X-terminals instead of workstations
(increased efficiency).
Software explosion.
Faster networking needs more management.
---------------------------------
HEP Computer System, University of Dortmund; Klaus Wacker
Serves 2 physics groups; about 45 people in total.
Previously had Nord 500; replaced by 7 RS6000s delivered in 12/91.
Machines are connected by Serial Optical Link, which works so far only for direct
neighbours (3.6 Mb/s for binary ftp). Each machine has 1Gb of staging
disk and some have 3480 drives. The link is used to transfer data between tape
and staging disk, and also for NFS access to the file servers. The topology
is such that all (but one) machines have a 3480 tape drive locally or on a
direct neighbour, and the login servers are direct neighbours of the fileserver.
Running AIX 3.2.1. Use NFS and automount to create a host-independent
environment:
$HOME = /net/xxx/home/wacker
/cern -> /net/yyy/cern
/h1 -> /net/yyy/h1
Use NIS (yp) for user administration.
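A minimal sketch of how such a host-independent layout can be wired up with
an indirect automount map; the map file name, server names and export paths
below are invented examples, not the actual Dortmund configuration:

# /etc/auto.net - indirect map for /net: "key  server:exported-directory"
xxx   xxx:/export
yyy   yyy:/export

# start the automounter on /net, then add host-independent symbolic links
automount /net /etc/auto.net
ln -s /net/yyy/cern /cern      # /cern resolves the same way on every host
ln -s /net/yyy/h1   /h1
# $HOME can point directly into /net, e.g. /net/xxx/home/wacker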
Operation experience: not smooth at all but routinely in use. Main
problems in networking area - automount, NIS, routing.
Software:
/cern NQS (CERN version) works nicely but load balancing could be
improved. SHIFT tape handling not working yet.
/h1 New experiment. Some portability problems but now all resolved and
programs in use apart from event display (GKS).
/argus old experiment, code not written to be portable. Problematic!
What do we expect from HEPix?
Generally a lot! We are only physicists, not full-time system
administrators.
Examples:
- Support SHIFT software.
- Help with editors: vi (enough said), emacs (is it worth 20Mb + Lisp?),
uni-xedit (cost), Xedit (too basic). Using aXe - very nice look and
feel. Modeless, mouse or emacs keyboard, X11 based. Recommend it for
X-terminals (and emacs for VT100s?)
- Recommend and if necessary support Unix software. Often alternative
solutions to same problem. Which one to choose?
---------------------------------
CERN, Alan Silverman
Unix workstation support (not central farms)
About 1000 workstations on site. 5 supported architectures (Sun, Apollo,
Ultrix, HP/700, RS/6000). Support aimed primarily at local system admins,
not end-users. Six support staff - one primary and one back up per
architecture (plus print server). Central server for system/commercial
software per platform. Funding for some centralized software purchasing
and centralized software support. Problem reporting to centralized
e-mail account per architecture. Also news group per architecture.
CERN tailored guides for installation and simple system management
tasks.
Also try to prepare for the future - Kerberos evaluation, AFS pilot
project, DCE toolkit tests, management tool tests.
Current challenges: never enough staff - currently trying to absorb
RS/6000 support. Spend too much time answering mail and phone. Little
time for producing guides and doing development. Came late to Unix support
- already established experts and religious wars: "my shell/GUI/..."
is better than "your shell/GUI/...".
---------------------------------
Yale University, High Energy Physics, Rochelle Lauer
Difference since the last HEPix meeting - systems are now in production -
required uptime, required reliability. Now have DECstations, Silicon
Graphics, VAX/VMS, PCs and Macs.
Users working at diverse experimental areas (CERN, Fermi, ...). Need
to be able to tailor their environment.
Goals: minimize system management; common personalized environment;
user tailorable environment.
System Management: invisible to the user
DECstation server
DECstations use enhanced security - Kerberos authentication. SGI
can't participate.
Shared files - /usr/local... scripts, products, man pages
System maintenance - shared crontab, backup, print (yale_print)
Share files as much as possible - if necessary common file calls
specific files.
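The "common file calls specific files" idea can be sketched as follows; the
paths, file names and hostname convention are invented, not the actual Yale
setup:

#!/bin/sh
# shared script, identical on every machine
. /usr/local/etc/setup.common              # settings valid everywhere
host=`hostname`
if [ -r /usr/local/etc/setup.$host ]; then
    . /usr/local/etc/setup.$host           # per-host exceptions only
fi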
User Environment:
Vanilla Unix (!?!)
Implement common Unix tools, eg, emacs, xgrabsc,.. (need more)
Make Yale specials obvious eg yale_print
Main products - CERNLIB, emacs, nuTPU, Autocad, IDEAS, modsim, f90
========================================================================
From: Paul Lebrun, Fermilab
Subject: Minutes of the Site Reports Session, Part II.
5th Speaker : John Gordon, from RAL.
Rutherford Lab, U.K., is not only a HEP lab, but also supports
various scientific and engineering activities. Among them :
- Informatics : where large UNIX expertise is available, mostly for
consulting and advising.
- Conventional engineering using CAD/CAM packages.
- Starlink, an Astronomy project moving away from VMS towards UNIX.
Particle Physics, HEP, is a group with one to two years of UNIX
expertise. They mostly use VAXstations on their desks and rely on UNIX compute
servers. Currently, they manage two Apollos, two DECstations and one HP 750.
The expertise from central computing was helpful; as mentioned earlier,
expertise on various UNIX systems (RS6000, SUN, SGI, DEC and Cray) is
available.
The list of supported products is long, as long and tedious as the
previous speakers'... like everyone else's... The result is chaos!
Rather than going through this list, let us mention products specific to
RAL, such as HUMPF (Heterogeneous Unix Monte-Carlo Production Facility),
which features NFS access to various file bases, does not use NQS much, but
has VTP, a package providing file/tape access to the IBM. The OPAL
collaboration was able to get their production going on Apollos, ALEPH on
DECstations.
Back to the central service, where the following facilities are
provided :
- FDDI
- Site Mail Service (PP)
- Office Strategy (not much going on for the moment)
- IP coordination within the SERC area (common UID/GID list)
- PostScript printer management
- Netnews.
They are looking at various future options, such as NQS, AS-IS, standard shells
and profiles for users, backup strategies....
Question : Why choose PP for the mail service?
--------
Answer : The U.K. strongly recommends basing all WAN mail on the X.400
protocol; PP is a good match to X.400 technology.
----------------------------------------------------------------------
6th speaker, Yoshiyuki Watase, Report for KEK.
Like the previous institution, KEK is not only HEP. The site has a
large computing center, with mainframes (Hitachi and Fujitsu) and a VAX
cluster.
- Unix in use : 38 Suns, 34 DECs, 47 HPs, 3 IBM RS6000s, 5 SGIs,
5 Sonys, 13 "others", against 89 VAXstations running VAX/VMS.
- Software support : No central support for desktop workstations
running UNIX, but casual help, advice and so forth. They are currently
working on a plan to formalise this support.
What is centrally supported is of course the Hitachi mainframe
running UNIX. In addition, there are 11 DECstations clustered in a "farm", an
IBM RS6000 used as a database server, various networking servers running mostly
TCP/IP and a file server with 4 Gb of disk for software such as CERNLIB.
Service is provided through NFS mounts.
- Future plans : The computer farm will be attached to the mainframe,
in order to provide efficient batch job services. The current plan is simply
NQS, but there is an obvious need for a good batch system and data serving
system...
----------------------------------------------------------------
7th Speaker : Eric Wassenaar, from NIKHEF
This institute has two distinct sections, Nuclear Physics and
HEP. The latter supports experiments DELPHI, L3, ZEUS and is part of the
ATLAS proposal. In Amsterdam, besides NIKHEF, there is also SARA, a
supercomputer center at the University of Amsterdam.
First some remarks on the WAN, running WCWean; SARA/NIKHEF is part
of the "Ebone" and "EUnet", also supporting Surfnet, Datanet, IXI-EMPB.
Second, the LAN comprises Ethernet segments, an Apollo token ring, and Ultranet
connected to the IBM at SARA, in order to access data stored in the
robotic tape vault. In addition, they now have FDDI.
The hardware comprises 5 Suns, 39 Apollos, 6 HP-9000s, one DECstation,
1 IBM RS6000 and 8 X-terminals. The system management can be characterised
by "strict central management". For instance, upon first boot across the
LAN, the local disk will be erased, to be mounted properly. There is global
access via NFS. The accounting is done "cluster wide". The directory structure
seen by the user is uniform across the cluster:
/user/..
/group/...
/stage/...
/cern/bin/...
/usr/local/bin/...
...
No machine-specific names... These measures lead to a uniform system
environment. The user also has standard "dotfiles", shell commands, and uniform
e-mail. TeX is used for documents. They are currently trying out MOTIF.
They have no experience with OSF and/or AFS. Is Kerberos worthwhile?
There is no batch system; they are not running NQS. They are handling
bulk data from their experiments, but not through SHIFT; there is no automatic
way of doing data transfer through ULTRAnet.
Question : How many people are supporting these activities ?
Answer : 8 in the center, including operators, but only two
systems managers.
----------------------------------------------------------------
8th Speaker : Lisa Amedeo from Fermilab.
The conversion from VMS to UNIX at Fermilab started about two
years ago. This process was, in the end, "fairly easy" for the users.
Since then, we have seen "astronomical" growth in UNIX usage at the Lab.
Two years ago, we had only one SGI farm; by the end of FY91, about 300
worker nodes or I/O servers were included in the farms; by the end of
FY92, the site has at least 425 Unix nodes, with roughly 200 of them
included in farms. By the end of this calendar year, we expect to support
about 570 nodes. The number of system managers could not grow as fast;
as a result, one system manager takes care of more than 100 nodes.
Fermilab currently supports 4 platforms : SGI, RS6000 (strongest support),
Suns and DECstations.
This has been made possible through the establishment of a good
production environment, carefully designed at various Fermi UniX Environment
meetings. Systems such as :
- UPS : Unix Product Setup, allows all supported products to be
accessible through the same procedure, with consistent directory structures,
help files, libraries, version naming and so forth (see the sketch after
this list).
- Various DCD utilities such as "addusers", "delete users", ....
- UPD/UPP : UNIX Product Distribution systems.
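As an illustration of the kind of uniform access UPS aims at, a session might
look like the lines below; the command names and arguments are assumptions
for this sketch, not necessarily the actual Fermilab syntax:

setup cernlib          # make a supported product available in the current session
ups list cernlib       # query which versions are installed (command form assumed)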
These tools and others are supported across heterogeneous clusters.
Users like this environment, and do request more utilities. Some systems
are also supported by "Local Managers", using the same methodology, and
getting advice from the Unix Support Group.
New developments :
- FNALU : a centrally managed UNIX system, targeted at first-time UNIX
users making feasibility studies, supporting specific commercial software
and so forth. Access will be granted on a case by case proposal; this
system will not be opened to "everybody".
- CLUBS : A large UNIX cluster dedicated to conventional BATCH
activities.
- Site mail service, supporting various protocols.
HEPiX can really help Fermi, as well as other labs, through information
sharing and exchange of ideas. We would be able to include some requirements
from other labs, so that these systems can be used elsewhere. Unix can be a
positive experience, but it takes a lot of work and cooperation.
Question/Comment : One has to distinguish between the management of
generic "worker nodes" and "standalone systems". How many System Managers
do you really have for each class ?
Re: Over 250 workstations on site are not part of farms, and their
management is "supervised" by the Unix Support group. But "local"
system management is sometimes available, and recommended where possible.
------------------------------------------------------------------
**** see end of next section for more notes on the topics below. Editor.
Some Notes on the IN2P3 presentation, by John O'Neall.
This is a report on HEPiX-F, where "F" stands for France. This
group comprises/represents 17 laboratories and 1 computing center and is
highly distributed across the entire French country. This group has
already held three successful meetings (January '92, March '92, June '92),
with topics relevant to HEPix, such as comparison of hardware and software,
identification of problems, and, more important, the formation of working
groups.
The most frequently mentioned concerns are :
- UID/GID uniqueness
- CERN distribution (solved!)
- File backups
- WWW needs ?
- coexistence with VMS
- Common User environment (shells, dotfiles, editors...)
- OSF ?
- X-env
....
Current working groups are :
1. Information exchange, comparing/benchmarking WWW and NetNews.
2. CERN development and software distribution.
3. UID/GID standards.
--------------------------------------------------------
UNIX at CCIN2P3, by S. Chisson.
- Proponent of BASTA, a UNIX based cheap Batch system.
See talk presented at CHEP92.
- Self service for physicists on SUN, HP, RS6000.
- Workstation for engineers.
---------------------------------------------------------
SSCL report, presented by L. CORMELL
First a few general comments on the status of the SSC laboratory :
(i) Funding is O.K. : we got $570 million from Congress for FY93, allowing us
to accomplish "most of what we want". (ii) We had a successful test of the
superconducting magnets (1/2 string running). (iii) With roughly 2,000
people currently at the lab, construction is really going on!
There is no "single Computing Division" at SSCL, but several computing
Departments :
- Physics Department has a Computing group, comprising roughly 20
people. (PDSF).
- The Central Service Division has a support group comprising roughly
80 people.
- The accelerator also has its own computing facility, based on
SUN workstations and a 64-node Intel Hypercube. They are also using
CRAY time outside the lab (at SCRI, Florida).
- The CADD community is using Intergraph, yet another hardware
platform. They support conventional construction engineering for the site.
Altogether, there are between 700 and 1,000 workstations running
at least 4 UNIX flavors : SUNs, HPs, SGIs and Intergraph. Obviously,
SSCL is not "an old Lab", and has no IBM mainframe to worry about.
Question : How many system managers per workstation?
Re : In PDSF, there are a few system managers for about 100 workstations.
Question : Do you have common tools across platforms in your
environment ?
Re : No, but soon we will try to get something going...
------------------------------------------------------------
LAFEX/CBPF report, presented by Mariano Summerell Miranda.
This center is located in Rio de Janeiro, Brazil, and is small
compared to the other centers previously mentioned. The history of the
computing facility might help in understanding the current situation :
- 1987 : one micro-VAX II, plus about 30 ACP I nodes.
- 1989 : Ethernet and PCs added, running Novell "NetWare".
- 1990 : VAXstations added to form a LAVC, installation of the
ACP II nodes.
- 1991 : SUN workstations added (a few), and the center is now
on the Internet.
The center supports about 100 users, 50 of them being "remote".
Altogether, the facility has 11 SUNs, 5 VAXstation 3100s, 2 microVAXes and
100 ACP II nodes. In addition there are 15 IBM PCs; they are currently
beta-testing NFS on this platform. The LAN is thin wire Ethernet.
The software supported is CERNLIB, CPS (ACP II farms), TeX, LaTeX,
and of course F77 and C. There is currently very little formal system support;
the end-user must be responsible in this environment. For instance, the user
must know how to reboot his workstation during off-hours periods.
The backups are done by secretaries through "canned" shell scripts.
What we would like from HEPiX is more user documentation and knowledge about
security. However, the goal of creating a single user environment for HEPix
seems very difficult.
Hopefully, the center will have more manpower in the coming fiscal
year, allowing more formal tools to be developed.
========================================================================
From: Jan Hendrik Peters, DESY
Subject: Minutes of the Site Reports Session, Part III.
Guido Rosso, INFN
INFN consists of 20 sites connected by 2Mb/s and 64kb/s lines. Most of their
users are still on VMS but there are about 150 Unix WS, mostly HP9000/7xx
and DEC 5000/xxx. INFN is waiting for the new alpha stations but is not sure
whether to run them under OSF/1 or VMS.
There is a national committee for computing policy - one person from each site.
5 people are working on application benchmarking and one of the sites is
responsible for network management (hardware and software).
Currently they are attempting to identify "standard" tools for the user and to
update the CERN library and tools in a timely fashion.
On the Unix platform mostly MC programs are run, but it is planned to use
Unix WS also for real-time and CAD applications.
The backup system is based upon DAT and Exabyte.
Les Cottrell, SLAC
At SLAC there are about 170 Unix WS and 3 central servers, 85 X-terminals,
more than 800 ip nodes on 7 subnets. Central support is given to RS6000,
NeXT and Suns, only limited support is available for DEC, SGI and HP.
Programs and tools used are tcsh, emacs, elm, X11R5, Motif, perl, WDSF,
Usenet, nfs, nis, netinfo (on NeXT), automount daemon, afs with nfs translator,
Fortran77, framemaker.
There is a nightly incremental backup of the RS6000 and OS/2 systems to
the STK silo using WDSF on VM. The data files are transported in a compressed
format. NFS is used for NeXT and Sun stations.
The support group for distributed stations will answer any request within
2 days; the working groups have a local coordinator.
Netnews is installed on VM, VMS, Unix, Mac and OS/2 systems. The IBM/VM now
supports ftp, telnet, finger, whois, rsh and X-clients.
Current plans are X-terminal support, channel interface for FDDI via an RS6000,
Unix batch, and the access of the STK silo from Unix.
Wolfgang Friebel, DESY-Zeuthen
DESY consists of two sites in Hamburg and in Zeuthen. The central computing
manpower is 60 and 20 respectively. Hamburg had a smooth transition to Unix
with the introduction of Apollos as graphics devices; now there are DN10000,
RS6000, SGI 4D/460, DEC 5000, HP9000/425 and lots of X-terminals.
At Zeuthen there used to be IBM clones and now 90% of the systems run Unix:
Convex 3210, RS6000, DEC 5000, HP9000/720. In total there are 215 Unix systems
at DESY, with roughly 40 different user profiles.
On the network side there are 30 ethernet segments, Ultranet to connect the
IBM with the SGI and Cisco box, Apollo and IBM token ring, as well as FDDI.
Current problems are the harmonization of the environments on the various
systems in Hamburg and Zeuthen; topics discussed are shells (tcsh, zsh),
editors (emacs keyboard mapping), printing, user registration, etc. Since
August there has been a DESY-wide Unix group (DUX) whose aims are quite similar
to those of HEPIX. This group would like close contact and information exchange
with HEPIX.
John O'Neall, IN2P3
The French subgroup HEPIX-F had 3 meetings in 1992. Of the 17 laboratories
and the 1 central computer center of IN2P3, 10 sent representatives to the
meetings. The activities covered a compilation of the existing hard- and
software, identification of problems and the formation of working groups.
The concerns mentioned most are: user registration (uid/gid), CERN software
distribution, exchange of ideas and experiences, backup, coexistence with
VMS/VM, user environment, OSF, source management, batch and tapes,
X-terminal management, computer aided publishing.
Working groups have been formed for communications (www, netnews, ...),
CERN software, and database for user registration (implemented on VM under
SQL/DS).
Staffan Ohlsson, IN2P3 Computer Center
Activities of the computer center are in the area of BASTA (batch farm),
and the support of workstations for physicists (Sun, HP, RS6000) and engineers
(RS6000, NeXT).
BASTA has run since July 91 and is now implemented on RS6000 and HP9000/730
(9 stations in total). The batch queuing system bqs is built on top of nqs
to allow scheduling (more than 85% of the cpu time is used), time limits, and
full user control of the destination machine. All mods are POSIX compliant.
The system allows output staging (Ztage) to the mainframe with 600kB/s.
User files on VM and CERN software are nfs mounted. Future plans cover
hanfs (highly available nfs), a general purpose batch cluster, input
staging, AIX/ESA for the mainframe.
Larry Cormell, SSCL
At SSCL there is no central computing division but a physics computing dept,
the NIS group and the accelerator computing dept.
Lab wide there are about 2000 PCs, 700 workstations (Sun, HP, Intergraph),
3 VAX clusters (Oracle on one of them), a hypercube (accelerator group),
and the central PDSF (physics data simulation farm).
Currently there are not many common tools.
Mariano Sumerell, LAFEX/CBPF
The institute started off in 87 with a uVAX II and ACP, added Ethernet and
Novell, ACP II, VAX stations, and Suns, and now has access to the Internet.
There are about 50 internal and 50 external users who are using the
11 Suns, 5 VAX 3100s, 2 uVAXes, the 100-node ACP II (running Unix), an 8-node
ACPMAPS, and 15 IBM PCs. Programs and tools: Cernlib, Fortran77, C, TeX, mail.
There is no central support staff; the support has to be done by the users
themselves.
========================================================================
From: Holger Witsch, ESRF
Subject: Proceedings of session: Kerberos and other security questions.
Rochelle Lauer (YALE) : Kerberos experience.
Rochelle installed both the HESIOD name service and Kerberos on an
ULTRIX machine. There is one server and four client machines. Her
objectives when installing them were to centralize
the passwd database, to solve known UNIX security problems and to
understand the security problems in order to work around them.
HESIOD provides shared system data files across the different machines
in the network. Kerberos provides authentication and access
validation.
Rochelle found that the whole installation had taken more time than
just installing NIS.
The documentation for Kerberos was poor and unreliable. Personal
support from DEC was only provided by one single expert.
The scripts provided with Kerberos can be put in the following
classes:
o some which were never used.
o some which were used all the time.
Rochelle thinks it is impossible to get the installation right on the
first attempt. She did the installation several times.
Her conclusions are:
o their systems now work reliably, although she feels a lack
of knowledge in the HEP community.
o installing the packages added complexity in her systems,
which makes solving problems more difficult.
o Kerberos caused a considerable amount of additional work.
o she thinks it is worth it.
Her questions to the HEP community are :
o is there anybody out there who has experience with Kerberos?
o what is the future of Kerberos?
o will it be the standard one day?
**********************************************************************
Ben Segal (CERN) : Kerberos experience.
CERN set up KPP (Kerberos Pilot Project) from June 1992 on.
The motivations were:
o to stop passwords crossing the network in clear
o to offer one secure login per day and per user.
The latter was rather considered a dream.
The high priority aims were:
o to learn about Kerberos using MIT Kerberos version 4.
This was done by installing the DEC version server on ULTRIX.
o try to port it to SunOS. Help was found at the Imperial
College, since they run a SUN based Kerberos.
o try to port it to HP-UX.
The low priority aims were:
o to port it to AIX
o test TGV's Multinet version of Kerberos on VMS
o Track V5 development, in conjunction with OSF-DCE
development.
The ports to SunOS and HP-UX went well.
Kerberos Version 4 offers the possibility to run on different
platforms. An advantage of Kerberos is the fact that it is not seen by
the user, although the administrator has to declare users and machines
to be valid.
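For reference, the user-level workflow with MIT Kerberos V4 is essentially
the following (standard MIT commands; realm configuration and any CERN
specifics are omitted here):

kinit          # obtain a ticket-granting ticket; the only password prompt of the session
klist          # list the tickets currently held and their expiry times
kdestroy       # discard the tickets when leaving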
Multinet together with version 5 will not be possible because of the
export restrictions of the US (DES).
CERN feels confused by the OSF/DCE prospect.
AFS contains both version 4 and version 5 features of Kerberos.
AFS and Kerberos 4 compatibility is a problem. CERN does not want to change
the manufacturer's code but will probably be forced to.
Conclusions:
o there is rather small penetration into general use,
possibly due to the university origin.
o the original aims will not easily be achieved. CERN thinks
it very unlikely in the next 1-2 years using 'standard' software.
**********************************************************************
Eric Wassenaar (NIKHEF): Security Questions
Please refer to the paper copy of his speech. Eric wanted to alert the
attendees to security problems by distributing a catalog of security
questions with his slides. He also tried to define what he thinks
security is.
**********************************************************************
Les Cottrell (SLAC): ESnet
Les went through his slides at hyper speed. All of his speech is represented
in the copies of his slides.
**********************************************************************
Alan Silverman (CERN): Experience with COPS & CRACK
Alan thinks the attempt to turn 'normal' users into system
administrators is one of the biggest security problems on big sites.
CRACK:
The passwd file is the most likely security hole. Many passwords are
guessable; once a password is guessed, other hosts are most likely
accessible.
A policy against such security holes could be Kerberos' centralized
password database and a regular password guessing service. Crack is
public domain. The current version is 4.1.
Crack guessed about 25% of the passwords at CERN. The dictionary
provided with Crack was expanded at CERN with French, German and
Italian words.
Crack needs about 40 minutes to go through a password file on an
HP9000/710.
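A typical run of the public-domain Crack package looks roughly like this;
the installation path is an example, and the extra dictionaries are simply
word lists dropped into Crack's dictionary area before the run:

cd /usr/local/src/crack       # wherever the package was unpacked (example path)
./Crack /etc/passwd           # start the guessing run on the password file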
COPS:
Intended to check the security of the filesystems, it will search the
entire filesystem and check permissions and false write permissions on
files like /etc/passwd, /etc/hosts.equiv, $HOME/.rhosts, $HOME/.netrc
and $HOME/.X*.
It is highly portable and easy to configure. Also, being small, it will
not consume large amounts of disk space. With all its power, it will not
make any changes in the filesystem, but writes warnings into mails or files.
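Running COPS is a matter of unpacking it and invoking the top-level script,
for example from cron; the path below is an example, and whether warnings go
to a file or to mail is chosen in the package's configuration:

cd /usr/local/src/cops        # wherever COPS was unpacked and configured
./cops                        # run the checks; warnings go to the result file or to mail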
========================================================================
From: dimou@dxcoms.cern.ch (Maria Dimou-Zacharova. CERN)
Subject: Minutes of the WWW session of the HEPix meeting
Monday Sept. 28th 1992
Chair: John Gordon
Speaker: W. van Leeuwen
Title: "How to hook into WWW in a simple way"
History of WWW at NIKHEF:
- Linemodebrowser was first installed (11/91) to access xfind, xnews, who
and alice from CERNVM without having a CERNVM account.
Access to SLACVM for spires and freehep followed.
- A daemon (server) was installed in 02/92, which allowed keyword search and
access from the outside.
- The prompts are prepared in html (HyperText Markup Language), which is an
SGML extension with anchors pointing to other documents.
- The actual search is done by a shell script name.sh, corresponding to the
name.html that manages the display.
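A hedged illustration of the name.sh/name.html pairing: the daemon invokes
the script with the search keyword and sends the generated hypertext back to
the client. The index file, fields and markup details below are invented:

#!/bin/sh
# phone.sh - called by the WWW daemon for keyword searches linked from phone.html
echo "<TITLE>Search result for $1</TITLE>"
grep -i "$1" /usr/local/www/phone.index |
while read name node; do
    echo "<A HREF=\"$node\">$name</A>"
done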
Conclusion:
HEPix information should be available through WWW and maintained up-to-date.
---------------------------------
Speaker: T. Berners-Lee
Title: WAIS, Gopher, WWW comparison
All three are client-server information systems but they follow different
conceptual models of the data.
WAIS is built around an index of files on a server; the index points to the
documents. Clients can interrogate more than one server. Originally designed
for Dow Jones, with software provided by Thinking Machines for Apple computers.
It is entirely Unix based. Full text search is performed according to
an ISO-like protocol on top of TCP/IP. No navigation is possible.
Gopher was developed for the Univ. of Minnesota users' help desk.
It is menu driven and accesses documents linked by pointers in a graph topology.
WWW resembles Gopher but is hypertext driven.
Search: offered by all of them.
Navigation: not offered by WAIS.
Hypertext based: WWW only.
Standard: WAIS for the protocol it uses and WWW for the doc. format (SGML).
Deployment: WAIS about 200 sites (due to publicity), Gopher about 200 sites,
WWW about 20 sites. (Tim has since revised this number to 40 sites - Editor)
Aim: WAIS for information retrieval, Gopher for a campus wide information
system and WWW for Computer Supported Cooperative Work.
From WWW one can access Gopher and WAIS information.
Discussion on this session:
A. Silverman claimed the deployment gap is too wide (factor 10) and a nice
X-window interface is necessary for WWW to make it attractive.
M. Sendall proposed choosing WWW and building up gateways to WAIS to profit
from the WAIS information base.
There is a prototype now that will become a product at the beginning of 1993 and
will allow TeX and Postscript by format negotiation between client and server.
P. Lebrun believes that in the end good quality information will have to
be charged for. He requested integrated hypertext and graphics.
The chairman asked the audience to confirm their intention to use WWW.
As many HEPix people use LaTeX and Framemaker for document preparation,
A. Silverman proposed that WWW offer such interfaces.
J-P. Matheys suggested that WWW should only point to the document location and
that anonymous ftp be used for fetching.
The chairman suggested that everybody make available documents of public
interest, ideally through WWW or, at least, on the HEPix list.
To get WWW documentation telnet info.cern.ch or mail www-bug@info.cern.ch.
========================================================================
From: Lisa Amedeo
Organization: Fermilab Unix System Support Group
Subject: Notes from the X-topics session:
First speaker: Thomas Finnern (DESY)
Topic: X11 at DESY, Xterminal configuration and login support
Thomas said that there are three groups looking into X11 topics
at DESY. Their emphasis has been split into 3 areas: Xterm support,
user support, and installations.
Information on installations:
DESY supports NCDs, IBM RS6000s, HPs and Apollos, so they need
servers to support various aspects like booting, fonts etc.
They have also set up Xterms with a complete TCP/IP configuration.
There are backup boot servers out of each subnet. The type of base
configuration you want/need determines the configuration server
you will use. They are also trying to keep server interdependencies
small.
Information on user support:
There is very little need for user interference. All customizations
are X11 compliant. Logins are set up with an XDM login panel and a
Motif window interface. They have set up "virtual" keyboard and
font names so you can have an environment-dependent login.
Xsessions are very flexible for different environments. X11init calls
Xsession with an init-only option.
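An Xsession of the kind described is just a shell script started by xdm after
login; a minimal example follows (the resource file, clients and window
manager choice are illustrative, not the DESY configuration):

#!/bin/sh
# Xsession - started by xdm for every X11/X-terminal login
xrdb -merge $HOME/.Xresources      # load the user's X resources
xterm -ls &                        # one login shell window
exec mwm                           # Motif window manager; session ends when it exits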
---------------------------------
Second speaker: Paul Lebrun (FERMI)
Topic: Experience with Visualization Systems based on Motif and X.
There are several commercial visualization packages including Seian,
PV-wave, AVS and Explorer. UNIX is the only platform on which HEP
visualization can be done, due to availability of resources. This
sort of visualization provides a good analysis tool. Examples of
places visualization can be used:
Magnetic field analysis.
Detectors, particles and fields can all be represented in one
window.
Many-body problems.
Problems associated with visualization include hardware and software
costs, most software is proprietary, there are few good open systems
and there is no standard emerging.
---------------------------------
Third speaker: Philippe Defert (CERN)
Topic: Experience with tk/tcl
Philippe gave details about WISH, which is a shell built on Tcl,
TclX and Tk. WISH has an interface builder that builds a wish
script that produces the interface you want. He gave three examples:
the first was xf2wish, which is an X interface for f2h.
The second example was epip. This is a product installation procedure
that generates a shell script to do what you want with the product.
The third example was tkwww, which is a hypertext X interface for WWW.
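To give a flavour of how lightweight such interfaces are, a complete Tk
"hello" window can be fed to wish straight from the shell; the widget names
below are arbitrary and the syntax is as in the standard Tk documentation:

wish <<'EOF'
button .hello -text "Hello HEPiX" -command { exit }
pack .hello
EOF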
========================================================================
From: Alan Silverman
Subject: Posix update
FNAL has 2 reps in POSIX groups.
POSIX.7 works on system administration issues such as printer management,
based on Palladium, and software administration, based on HP's software
installation package. Ballots are planned for both of these in 1993.
POSIX.10 deals with differences between scientific computing and
general computing. Also discussing Removable Media (tape) proposal.
This work may split off into a separate group.
POSIX.15 deals with batch. Ballot process just getting started. SLAC
also participates in this.
POSIX.8 transparent file access (includes things like NFS, RFS). Should
go to a second round of ballot in Spring 93 after resolving objections
from the first round.
POSIX.1.LI language independent version of POSIX.1. Official ballot
now closing.
POSIX.2, shells and utilities, expected to be formally approved this
month by IEEE.
========================================================================
From: Judy Richards
Subject: OSF Update
Unix wars are over! Unix world must unite in order to stand up to the
threat of Windows/NT!
OSF isn't very big, about 300 people to look after all business aspects
(negotiating with software suppliers, publicity, trade shows, validation,
trademarking,...) as well as write and test code. Poor code quality in
early releases is a major concern. A program to make improvements in this area
is showing first signs of success.
Motif 1.2, based on X11R5, released by OSF this summer. Expect to see
from our suppliers 6-9 months later. Includes better font support, drag
and drop capability.
DCE consists of threads, naming service, time service, remote procedure call,
security service and distributed file system (DFS). Set of tools for building
distributed applications. Most important to HEP - security and DFS?
DCE 1.0, released Dec 91, very poor quality, no DFS. Release 1.0.1, released
Jul 92, improved quality, basic DFS (but probably enough for most of us).
Should start seeing products in 1993.
DME, Distributed Management Environment, consists of 'Framework' and
'Distributed Services'. Distributed Services are 'ready to use' applications
and should be released by OSF to our suppliers in mid 93. They include
- event handling - notification, logging, filtering,...
- licensing - NetLS from HP with PC support from Gradient (PC Ally)
- software distribution and installation - from HP (with IBM support)
- printing - Palladium from MIT
The 'Framework' is based on object oriented technology which comes from
Tivoli Wisdom, IBM Data Engine (used to manage the NSF network) and HP OpenView
with Postmaster. There will also be a Motif based Management User
Interface so that applications should have the same 'look and feel'.
========================================================================
From: guijarro@sunsoft.cern.ch (Juan Manuel Guijarro)
Subject: AFS notes
We had two short presentations about AFS experiences at CERN (by
R. Toebicke) and by the ESNET AFS Task Force (by Les Cottrell):
AFS is a networked, distributed file system which has some
advantages over NFS. The main characteristics are:
1. It offers central administration and distributed administration
tools. It is organized in AFS cells (which include AFS servers
and clients).
2. The users see a uniform world wide file system space, without
needing to know the file server name.
3. It performs file caching (it does not just transfer whole files; it
works with chunks). The chunk size can be set on a per-cell basis.
4. It provides a Kerberos authentication server, protection database,
backup database and volume replication.
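On the client side, the cache behaviour mentioned in point 3 can be inspected
and adjusted with the standard AFS "fs" command suite; the size below is an
arbitrary illustration (in 1K blocks):

fs getcacheparms              # show current disk cache usage and size
fs setcachesize 40000         # change the cache size until the next reboot
fs checkservers               # check which AFS file servers are reachable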
At CERN, 2 servers have been set up (an HP 720 and an IBM RS6000,
both with 2GB of disk space) and clients on machines running HP-UX,
SunOS, AIX, Domain/OS, Ultrix and NeXT. As the servers have
different architectures they cannot back each other up (in case of
failure); it would therefore be preferable to have servers with the
same architecture.
ESnet has established an AFS Task Force which has made
recommendations regarding local implementation of AFS at ESnet sites
(AFS cell names, management duties, cache size, etc.). They have also
proposed roles and responsibilities for NERSC and tried to educate
the ESnet community about AFS. The final report about this experience
is available from nic.es.net as /afs/report.ps.
Although the transition from NFS to AFS is not easy, it is already
possible to enable NFS clients to access AFS files. Additionally,
Transarc provides tools to move from NFS to AFS.
AFS can only be used to access files that are in UNIX file systems
(not VMS file systems, etc.). Caching is mandatory but one can change
the cache size.
It seems that AFS will be used intensively in the future. A lot of
people are interested in it, but few have bought licenses. It has
one main advantage: it will be part of DCE (since DFS is an upgrade of
AFS).
========================================================================
From: geiges@asterix.kph.uni-mainz.de
(Rainer Geiges, Inst. f. Nucl. Phys. Univ. Mainz Germany)
Subject: Session Sep. 29th 10:45 - 12:30 System Management Tools
Corrie Kost (Triumf): Management, file-serving and tools for ULTRIX
At Triumf a DECStation 5000/240 is used as a central server for providing
system services like Internet Name Serving, RIS, YP, X-Term boot service
and Network File Service. No users are allowed to login to this machine.
Group software is also put on this server and is maintained by managers
from the different groups.
The "/usr/local/bin" directory contains logical links to all "bin" direc-
tories which host the different software packages. To access all SW pack-
ages a single reference to the directory "triumfcs/usr/local/bin" needs
to be established by an NFS client.
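A sketch of the link scheme, with invented package and path names (the real
layout at TRIUMF may differ in detail):

# on the server: each package keeps its own bin directory,
# and /usr/local/bin collects symbolic links to the executables
ln -s /usr/local/cern/bin/paw    /usr/local/bin/paw
ln -s /usr/local/tex/bin/latex   /usr/local/bin/latex

# on a client: one NFS mount makes everything visible
mount triumfcs:/usr/local/bin /usr/local/bin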
A set of libraries is also centrally maintained (CERNLIB, CMLIB, SLATEC).
For documentation Framemaker and LATEX are used as tools. Printing
service is provided by the HPRINT package. For graphics output of the
analysis data a plot package called PLOTDATA together with a powerful
graphics editor "EDGR" is used.
For code management, since there are only small groups, the KISS package
is used.
As user network tools the availability of WWW and XARCHIE is regarded as
very helpful. A common problem in this area is the question of where to
find the latest version of a PD software package on the net.
From the management point of view, it is hard to keep all SW up to date
because of lack of manpower. Setting up new stations takes quite long
(2 days/station). Due to the lack of time no system tuning is done, which
is not a major problem as long as the system has enough spare power.
The NFS configuration shows some limitations.
The documentation of some 3rd party peripherals is incomplete and/or
erroneous and/or too scattered, which is cumbersome for the system manager.
Conclusions:
A server helps the management, but UNIX/ULTRIX must be made simpler to
manage. There exists an installation guide for new stations and a list of
tools available at the site.
---------------------------------
Les Cottrell (SLAC): Managing Program Binaries at SLAC
The user environment at SLAC is set up as an overlay to the vendor's base OS.
Home directories of the users are located on a server and are distributed
through the net with NFS and the AMD automounter. The users have netwide
UIDs and group IDs for UNIX, VM and VMS stations.
The program binaries are replicated via the lfu tool onto different machines
(slave servers) to increase the resilience against server failure and to
distribute the load.
The SW packages can be maintained by users without root privileges and
each package can be controlled and distributed separately. The management
is organized by installing a username "Mpackage" (e.g. Memacs) and an account
for each SW package. Each account owns ACLs for the maintainers of the
package; this authorized group of people does the management by "rlogin" to
the Mpackage account. Mail addresses "maint-package", "help-package" and
"guru-package" are available for users to send mail to.
Small packages are directly installed on "/usr/local" and large ones in
"/usr/local/package".
One master server exists for each package; slave servers are updated
automatically over night. Clients mount "/usr/local" from a nearby server.
Each slave server has a consistent view of the file system either by a
replication of files or by a link to the master. The automounter can choose
among a list of servers and falls back to alternates in case of failure.
The replication of files on the slave servers shares the load and
increases the resilience of the system. Backups are done via the IBM
stations using WDSF (workstation data safe facility).
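The server fallback relies on alternate locations in the amd map; a hedged
example follows (server and path names are invented), where amd tries each
listed location in turn until one responds:

/defaults    type:=nfs;opts:=rw,intr,nosuid
local        rhost:=srv1;rfs:=/export/local  rhost:=srv2;rfs:=/export/local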
The "lfu" program is used to distribute the files to the slave servers. The
operation is driven from a script that contains a list of conditions and
actions.
Problems:
Because of the time delay between installation onto the master and
the propagation to the slaves, a tool to allow the maintainer to "push" or
a user to "pull" a new version would be helpful. If by mistake an
installation is done on a slave, it gets overwritten at night. AMD has no stubs
for ls and file viewers/browsers. There are also security concerns about
the rlogin to Mpackage accounts.
Conclusions:
The tools described have been used for about 1 year now. About 180 stations
and 70 packages are managed by some 20 people. Lfu is sufficiently flexible
to customize packages on the different platforms. The uniform environment is
appreciated by the users. The netwide UID allocation scheme has been very
successful.
---------------------------------
Lisa Amedeo (FERMI) Massively Parallel System Management
The UNIX installation at FERMI consists of large workstation farms (about
570 WS in the near future). The WS farms are managed by 5 fulltime system
managers. Fermilab has developed management tools (Fermi Modular Backup,
Farm Console System Software) and has standardized the UNIX environment.
The FMB system was developed by the UNIX support group according to a wish
list from users and system managers. The system had to offer the possibility
to use the tape services provided by the FNAL installation, and it had to
support different platforms and various archive formats. Tapes must be
readable on any platform independent of the archive format.
The FMB system now exists in Version 3.5 on SGI, Indigo, IBM, SUN, HP and
DEC stations. The package comprises 9 user-callable scripts and several
internal scripts to provide special functionality. It is still necessary
to write a users guide for the package, and the tape mount scheme will be
expanded to provide additional features. FMB will then be released for
more general use.
The Farm Console System Software, based on a client-server set, was developed
to manage the large SGI and IBM farms at Fermi. It allows serial connections
to individual nodes in the farm and also the execution of commands on all
nodes or a subset of nodes in the farm. The FCSS logs the output and displays
errors and status from each node. There exist scripts to do management tasks
across the whole farm. The requirements for the console server machine are
a BSD UNIX system with many serial ports and a network interface.
The UNIX environment at FERMI was standardized to provide a common user
interface and common system management tools across all platforms. There
are tools to support distribution and installation of local and 3rd party
SW via UPS, UPP and UPD. Furthermore there exist site wide naming conventions
for local and NFS mounted file systems and standards for automount variables.
Question: Can a user do a restore operation with FMB by himself?
Answer: The user must call the system manager to get a backup restored.
Q: What are the security concerns with FMB?
A: The backup server must have root privilege on all client nodes.
Q: Is WDSF used at FERMI?
A: The unix support group uses only FMB; other groups at FERMI may use
different backup tools.
---------------------------------
Alan Silverman (CERN): Backup strategies and tools
A report was given on the backup tools currently available at CERN. Some of them
are under test and others are in daily use. A backup tool should support
multiple platforms, tape robots and stackers; it must be user friendly and
must spare resources. The traditional UNIX tools have drawbacks like poor
user interfaces and heavy network load when backing up NFS disks.
A list of some of the tested or used products is given:
DECnsr does not yet work correctly and some of the features are still missing.
The index files for the archive are too large (about 10% of the saved data).
OMNIBACK is only available on Apollo and HP systems, but clients for SUN are
announced. It is currently used at CERN to back up some 25 Apollos and one
NFS mounted ULTRIX system.
WDSF (DFDSM) is an IBM product running on a VM machine as server; clients
exist for IBM, PCs and SUN, and porting to ULTRIX and HP-UX is in progress.
Network and CPU resources are saved by compressing the data before the
transfer. The data are stored on cartridges in a robot. The system is in use at
CERN for 30 SUNs and 20 RS6000s. Each night about 150MB of data are
transferred. The catalog for 9GB of stored data is 800MB large. It is a purely
central solution, which is not always wanted by the users of WS.
Legato Networker, a commercial package, runs on SUN as server and clients
for SUN, ULTRIX and HP-UX exist. It supports Summus jukeboxes. There is
little experience with it at CERN, but it is widely used elsewhere.
DELTA BUDTOOL is another commercial package which only provides a nice user
interface to the traditional standard UNIX tools. There is little experience
with it at CERN, but it is widely used elsewhere.
In conclusion, a tool which satisfies all wishes does not exist yet. The
testing of the mentioned tools will continue. The users should form clusters
and choose local solutions with the most appropriate tool.
WDSF cannot offer a total service for all UNIX files on site, but the
coverage of the WDSF service will increase at CERN.
Q: Are there systems which can do a quick restore of deleted files?
A: In the Athena project system the deleted files are moved to tmp file space
and can be recovered.
Q: Was a recovery from a disaster ever done?
A: Yes, but it takes time and needs many robot cartridge mounts.
Q: The catalog disks are a vulnerable point; is mirroring of the disks done?
A: No, but the catalog is backed up regularly.
---------------------------------
Lisa Amedeo (FERMI): CLUBS - Central Large Unix Batch System
Fermilab is currently configuring a large WS cluster with a high performance
data server to allow the processing of several TB of data by batch jobs.
About 40TB of user data were acquired in 90-91 and a power of 5000 VUPs is
needed to process them. The system must provide a large and robust I/O system
and remote management capabilities, and it must be scalable to a size 2 - 4
times larger.
As data server an existing AMDAHL 5890 is used, with an STK silo, an 8mm I/O
subsystem and an array of disks for staging. The data server is connected to
the WS cluster of RS6000 and SGI stations by an Ultranet hub.
The batch system, which will be implemented, is based on the CONDOR SW package.
The Condor system provides the batch functionality on the stations, and the
data are provided by the NEEDFILE SW package. The needfile package will do
the tape staging and file caching and organizes the data transfer.
The system is currently being implemented and is foreseen to run by the end of
92.
Q: Why was CONDOR chosen as the batch system?
A: Several systems were tested; Condor seems to be the best choice. The hunting
for CPU is turned off, and checkpointing is only done on user request.
Condor appears to be the most complete package.
========================================================================
From: PAC@IBM-B.RUTHERFORD.AC.UK (Paul Cahill)
Subject: Bulk data handling
Afternoon session of Tuesday 29th September 92
VTP - Virtual Tape Protocol Paul Cahill - RAL
RAL have developed a mechanism for remotely accessing tapes using IP over a
network. In RAL's case the tapes are held in a StorageTek Tape Silo, managed
by an IBM 3090.
A library of routines, written in C, the Virtual Tape Protocol (VTP), allows
users to write their own applications to read and write tapes available on
the IBM.
A VTP server runs on the 3090 under VM, and caches files to be read
and written to 6GB of disc space. Up to 30 notional tapes may be simultaneously
accessed in this way, using only four physical drives.
The client implementation exists for Unix and VMS platforms and supports a
variety of tape block and label types. A generic tape application, called
TAPE, has been produced to perform most tape IO operations.
It is hoped to turn VTP into a device driver and also to implement clients on
VM/CMS and servers on UNIX.
Accessing Bulk Sequential Data: Tapes and robots Lisa Amedeo - FERMI
FERMI provide a Fortran-callable C library for tape access called RBIO. The
low level tape interface uses ioctl modular code. This mechanism is used
mainly from their workstation farms of SGI and IBM machines.
RBIO is currently available for IRIX 3.2, 3.3.x, 4.0.x, IBM AIX 3.x and VMS.
This mechanism is used to provide Exabyte access from Unix. A SCSI II
interface has been used to provide workstation access to their STK robot
tape silo.
Overview of CORE software Frederic Hemmer - CERN
CORE software derives from SHIFT (Scalable Heterogeneous Integrated Facility),
designed to deal with high volume, parallel jobs producing large amounts
of data. CORE consists of the SHIFT software, the CSF Simulation Facility, tape
and disk services, with CISCO routers directing data.
Also provided: Tape Copy Scheduler, Unix Tape Control (tape daemon), Remote
File Access System and RFIO.
The current software is running on the following platforms: IBM RS/6000, SGI,
HP, Sun, and AIX under VM.
A disc pool manager (DPM) is currently at version 2.0 and, using RFIO,
the HP machines are providing half the power of the VM service.
Robotic Tape Server for distributed Unix. Youchei Morita - KEK
Using a JSD computing farm for CPU intensive, remote I/O jobs, KEK required
large scale storage. Using 10 DEC 5000/125s as compute engines, another as
a file server using NFS, and a third as a controller, a 580GB Exabyte
carousel has been developed. The device uses an EXB-120 tape robot, with
4 drives and 116 8mm tapes. This is connected to an Ethernet via both
the DEC 5000/125 and a VMS/OS9 device, providing 5GB at 500KB per second.
A self-service approach was adopted, and so a cost effective mechanism has
been produced for about $100/GB.
CLIO (low overhead I/O from IBM to workstation) Chris Jones - CERN
CLIO, Clustered I/O Package, used to provide low overhead, high throughput
for connection of RS/6000 workstations to ES9000.
CLIO provides a multi-threaded, distributed processing environment with
process control and data conversion. CLIO is currently available for
MVS to VM to AIX communication.
CLIO Performance Tests at DESY Dieter Lueke - DESY
CLIO in operation at DESY has been used to connect an RS/6000 model 550
to an ES9000/440 with memory to memory connections without data
conversion. MVS to/from AIX.
Such transfers took only between 0.25% and 2% of a CPU rather than around
5% with Ultranet and 30% using TCP/IP version @ on the ES9000. Therefore,
no bottleneck is encountered at either end due to connection related
processing.
VMS tape staging Jamie Shiers - CERN
Remote tape staging to VMS using FATMEN? VAXTAP is not going to be used at
CERN, but queue scheduling for 3480s is possible on VMS. This is not really
a UNIX related thing so ......
========================================================================
From: alan@vxcern.cern.ch (Alan Silverman)
Subject: HEPiX Wrap-Up Summary
Summary of HEPiX Wrap-Up Meeting, CERN, 29th Sep 92
---------------------------------------------------
Following discussions held at the CHEP92 (Computers in High Energy
Physics) meeting in Annecy and during the two days of the HEPiX meeting
in CERN September 28th and 29th, it has been decided to restructure HEPiX
along the following lines.
In future, specific groups will be established to target specific areas
and named coordinators will be sought for each group. In addition, we
will form regional chapters for HEPiX, initially one European and the
other North American. Other areas may follow later as interest builds
up; in the meantime, interested parties in such areas are welcome to
attach themselves to whichever chapter they prefer and will be invited to
participate in any activities they wish in either chapter. The volunteer
coordinators in these chapters will be jointly responsible for the
general organisation of HEPiX and its working groups.
With regard to meetings, it was agreed to try to hold at least one
regional meeting per year and a full world-wide meeting once every 12 to
18 months. The final decision between 12 and 18 months will be taken at
the first regional meetings next Spring. The choice is between an annual
worldwide meeting and a meeting to coincide with the CHEP schedule which
has now been set at 18 months between meetings.
The first regional meetings should be targeted for Spring 1993, one in
Europe and one in the US or Canada. Although regional in scope,
interested institutes will be free to send people to either, or both. In
addition, some effort will be made by each chapter to send at least one
representative to the other's meeting. At that time, the coordinators
will decide, based on such aspects as interest level, on whether to
schedule a second set of regional meetings for late 1993 and hold the
next worldwide conference around the time of CHEP94 (planned for Spring
'94 in the San Francisco area) or whether to advance this larger meeting
to Fall 1993.
The various groups, their target areas, and the people responsible for
each are as follows ---
European Coordination
---------------------
Alan Silverman, CERN, E-mail - ALAN@VXCERN.CERN.CH
North American Coordination
---------------------------
Judy Nicholls, FNAL, E-mail - NICHOLLS@FNAL.FNAL.GOV
Common HEP UNIX Environment
---------------------------
It was agreed that it would be useful to study if some form of common
working environment could be produced among HEP UNIX sites. This could
ease the life of physicists who migrate among the labs and the
Universities as well as assist UNIX administrators in providing some
guidelines. What constitutes the environment is not fully defined
(shell, startup scripts, environment variables, window interface, tools,
etc, etc). Whether some useful consensus can actually be achieved is
also open to question; we may be too late in defining this, too much
history out there; different sites may have vastly different constraints.
Nevertheless, an attempt will be made. It was emphasised that whatever
results, if any, will not be compulsory and not necessarily the default
environment. The idea is simply to produce something that works across
multiple platforms and is available if so desired by migratory users and
others.
The initial work will be coordinated by Wolfgang Friebel, DESY, E-mail
friebel@ifh.de with help from John Gordon, RAL, E-mail jcg@ib.rl.ac.uk
and Alan Silverman, CERN, E-mail ALAN@VXCERN.CERN.CH. Evaluations of
their results should be performed by different sites, both large and
small, and Rochelle Lauer, Yale, E-mail lauer@yalehep.bitnet has already
offered help.
Documentation Collections
-------------------------
The idea of this exercise is to make available to HEPiX members all
documentation on the use of UNIX in HEP which might be of interest. Such
documentation will NOT be collected in a central site, it being accepted
that it would be continually out of date. Instead, a central registry
will be built up consisting of pointers. This registry will be made
available worldwide via the use of WWW. The work will be coordinated by
Judy Richards, CERN, E-mail JUDY@CERNVM.CERN.CH.
Tools Database
--------------
Similar to the previous group, this will consist of a set of pointers to
UNIX- based tools and utilities which are of particular interest to HEP
users. Once again, the tools themselves will not be stored centrally,
only the pointers. Along with these pointers, the group may try to
solicit reviews, hints, critiques, etc. And once again the database will
be made available via WWW and FREEHEP. Corrie Kost promised to try to
seed the database with a list of tools used at TRIUMF; his E-mail address
is kost@erich.triumf.ca. After the meeting, Robert Bruen, MIT, E-mail
Bruen@MITLNS.MIT.EDU, offered to try to coordinate this group. MICHAEL
OGG of CARLETON University, Ottawa, E-mail ogg@physics.carleton.ca, had
already offered during CHEP92 to try to find someone to help.
NQS Extensions
--------------
It had been made clear during CHEP92 that many sites had independently
produced enhancements to the basic NQS package. John O'Neall, IN2P3,
E-mail jon@frcpn11.in2p3.fr offered to study if some merging could be
performed.
Frequently Asked Questions (FAQ)
--------------------------------
From our experiences during the first year of HEPiX, it is clear that we
need to publicise ourselves more. It was agreed that we will use both
the existing USENET news group (HEPNET.HEPIX) and the existing E-mail
distribution list (HEPIX@HEPNET.HEP.NET) since not every interested
member uses one or the other. Alan Silverman and Judy Richards have
agreed to produce an FAQ about HEPiX which they will try to keep up to
date and publish at regular intervals, approximately monthly. (Although
not able to be present to agree at the time, we hope Judy Nicholls will
also provide input for the FAQ.)
The meeting closed with a plea by the organisers of the various groups
for aid in whatever way seemed appropriate - offers of talks at meetings,
lists of documentation and tools, tools reviews, input and suggestions
for a common environment, etc. Finally, the meeting gave a warm vote of
thanks to Judy Richards who had arranged the two-day CERN meeting so
successfully.
Alan Silverman
14 October 1992