scispace - formally typeset
Open Access · Journal Article · DOI

A CAMAC-Based Intelligent Subsystem for ATLAS Example Application: Cryogenic Monitoring and Control

TLDR
A subunit of the CAMAC accelerator control system of ATLAS for monitoring and, eventually, controlling the cryogenic refrigeration and distribution facility is under development, the first application of a philosophy of distributed intelligence which will be applied throughout the ATLAS control system.
Abstract
A subunit of the CAMAC accelerator control system of ATLAS for monitoring and, eventually, controlling the cryogenic refrigeration and distribution facility is under development. This development is the first application of a philosophy of distributed intelligence which will be applied throughout the ATLAS control system. The control concept is that of an intelligent subunit of the existing ATLAS CAMAC control highway. A single board computer resides in an auxiliary crate controller which allows access to all devices within the crate. The local SBC can communicate to the host over the CAMAC highway via a protocol involving the use of memory in the SBC which can be accessed from the host in a DMA mode. This provides a mechanism for global communications, such as for alarm conditions, as well as allowing the cryogenic system to respond to the demands of the accelerator system.


A CAMAC-BASED INTELLIGENT SUBSYSTEM FOR ATLAS
EXAMPLE APPLICATION: CRYOGENIC MONITORING AND CONTROL

R. Pardo, Argonne National Laboratory, Argonne, IL 60439 USA, Y. Kawarasaki, JAERI, Ibaraki, Japan, and K. Wasniewski, Argonne National Laboratory, Argonne, IL 60439 USA
Abstract

A subunit of the CAMAC accelerator control system of ATLAS for monitoring and, eventually, controlling the cryogenic refrigeration and distribution facility is under development. This development is the first application of a philosophy of distributed intelligence which will be applied throughout the ATLAS control system. The control concept is that of an intelligent subunit of the existing ATLAS CAMAC control highway. A single board computer resides in an auxiliary crate controller which allows access to all devices within the crate. The local SBC can communicate to the host over the CAMAC highway via a protocol involving the use of memory in the SBC which can be accessed from the host in a DMA mode. This provides a mechanism for global communications, such as for alarm conditions, as well as allowing the cryogenic system to respond to the demands of the accelerator system.
Introduction

ATLAS [1], the Argonne Tandem-Linac Accelerator System, is a major expansion of an existing heavy-ion accelerator facility which consists of an electrostatic Van de Graaff tandem accelerator and a prototype superconducting linear booster accelerator. The superconducting linac control system is a CAMAC-based system with an enhanced Digital Equipment PDP 11/34 computer as the central control computer. The CAMAC system is configured as a byte-serial multi-crate highway interfaced to the central computer via a serial highway driver residing in a Unibus memory-mapped crate.
The ATLAS project places a significantly increased load on the accelerator control system. This effect occurs both because of the increased number of devices which must be monitored and controlled and because the much more complex accelerator system requires more computing support in order to allow the staff, which has not increased significantly in size, to operate the facility efficiently.
The ATLAS facility contains a number of complex subsystems which can significantly benefit from increased automation and computer control. Subsystems such as ion sources, the cryogenic distribution system, a superconducting dipole switching magnet, and subsections of the superconducting linac are examples of complex subsystems which require, or can dramatically benefit from, close supervision and control through a computer system. We report in this paper the first application of a philosophy of expansion using single board computers in specific CAMAC crates to achieve distributed intelligence in the ATLAS control system.
System Design Considerations

There are many possible approaches which may be used to achieve such a goal, and the proper choice is often strongly influenced not only by local personal biases but also by the environment, both hardware and software, which presently exists. The following points are the requirements which the enhanced system has been designed to satisfy:
1. We felt that the major investment in hardware and software which had already gone into making our facility one of the most automated heavy-ion facilities in existence could not be abandoned. Therefore, the constraint of building on the existing system was a requirement.

2. The load on the central computer due to simple monitoring tasks should be minimized in order to free that computer for more complex tasks such as program development, high level calculations, data management, and human interfacing.

3. The congestion on the serial highway should be kept as low as possible to allow for future activities.

4. The reliability of the monitoring and control of the subsystems mentioned before should be high. This requirement essentially rules out the use of the central computer, since its reliability is compromised by the higher failure rate of disc drives, line printers, terminals, and other peripherals, not to mention system crashes that occur during such activities as program development.

5. The operation of these major subsystems locally should not be dependent on the operation of the overall CAMAC highway or the central control computer.
The solution which has been adopted at ATLAS is to add local intelligence in any CAMAC crate which contains the interfacing hardware for a particular subsystem. Such a configuration is shown in Figure 1.
Fig. 1. Hardware configuration for remote intelligence applications in the ATLAS control system (block diagram showing a 32-channel ADC, the dataway communications link, and the crate controller with SBC 11/21).

The local microprocessor is interfaced into the desired CAMAC crate through the use of the auxiliary crate controller protocol. Many of the functions of the remote processor require no interface to the central computer. Communication with the central computer is through the CAMAC highway using a CAMAC-DMA mode into the microprocessor's memory. This mechanism allows the use of LAMs as interrupts for both CPUs, as well as allowing essentially transparent polling of system status for data logging in the control room, direct communication to the operator from a remote site (home), and central data base updates. By maintaining all systems as integral parts of the serial highway, it is possible to develop more complex, seldom used routines which use various hardware components of the subsystem in a way that is often transparent to the local microprocessor. An example of this last feature is the measurement of the Q of a resonator using the cryogenic thermometry which is actually part of a microprocessor subsystem.
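To make the shared-memory idea concrete, the following C sketch shows one way the SBC side might maintain such a status block; the layout, the field names, and the request_lam() hook are hypothetical (the actual software is written in Pascal, as described below), but the flow mirrors the mechanism just described: the SBC updates a block of its own memory, the host reads it by DMA, and a LAM is raised only to announce an alarm.

```c
#include <stdint.h>

/* Hypothetical layout of the SBC-resident status block that the host
 * reads via CAMAC-DMA.  Field names and sizes are illustrative only. */
#define N_PARAMS 70                 /* paper: more than seventy parameters */

struct status_block {
    uint16_t heartbeat;             /* incremented by the SBC main loop     */
    uint16_t alarm_flags;           /* nonzero -> a limit check has tripped */
    int16_t  value[N_PARAMS];       /* latest scaled readings               */
    int16_t  low[N_PARAMS];         /* allowed range entered at the terminal*/
    int16_t  high[N_PARAMS];
};

static volatile struct status_block status;   /* placed at a fixed, known address */

/* Stub: on the real SBC this would assert a LAM through the auxiliary
 * crate controller so the host CPU is interrupted (hypothetical hook). */
static void request_lam(void)
{
}

/* Called after each scan of the ADC channels. */
void publish_scan(const int16_t *reading)
{
    uint16_t alarms = 0;
    int i;

    for (i = 0; i < N_PARAMS; i++) {
        status.value[i] = reading[i];
        if (reading[i] < status.low[i] || reading[i] > status.high[i])
            alarms = 1;             /* summary flag; host reads value[] for detail */
    }
    status.alarm_flags = alarms;
    status.heartbeat++;             /* lets the host see progress with no protocol */

    if (alarms)
        request_lam();
}
```

Because the host reads the block directly over the CAMAC highway, no request/reply software is needed on either processor, which is what makes the polling essentially transparent to the local microprocessor.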
All choices have disadvantages. The disadvantages of this approach include:
1. The elaborate libraries of control functions which have been developed for our RSX-based system cannot be used in the microprocessor environment we have chosen. Therefore, program development has been slower than it might otherwise have been.
2. Each subsystem is essentially limited to one CAMAC crate. This disadvantage can be overcome with the use of a local branch highway, and this is being done in our ion source subsystem, but it is not possible for a microprocessor to access another crate on the main accelerator CAMAC serial highway except through an elaborate communication scheme involving the central computer.
Hardware and Software Implementation

The microprocessor chosen for implementing this concept was the Digital Equipment SBC 11/21 (Falcon) single board computer. The Falcon is easily configurable in a number of memory modes. This allows development and debugging using RAM memory which can later be converted into PROM memory, allowing complete stand-alone operation independent of the rest of the control system. The Falcon resides in, and is interfaced into, the CAMAC crate through a Kinetics System 3921 crate controller operating in the auxiliary mode.
The CAMAC-DMA unit is a Kinetics System 3825 Dataway Communications Link. This system is installed in a CAMAC crate which is a part of the accelerator control system serial highway.
The software development environment selected is Pascal with parallel features similar to Modula. This software runs under the RT-11 operating system and has features quite similar to DEC's MicroPower Pascal. The software runs in the Falcon with no operating system and, in the final version, will execute immediately upon bootup.
During the development period, which we are still in, the executable code is compiled and linked on the program development computer system and then downline loaded into the SBC 11/21 using the features available in the ODT PROM on the 11/21. The debugging cycle can actually include execution of the code on the development computer under RT-11 prior to testing in the target system. This feature is accomplished via switches in the compile and link cycles.
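The debug-on-the-host, then run-on-the-target cycle can be expressed in any compiled language with build-time switches. The short C sketch below is only an analogue of the Pascal compile and link switches just described; the TARGET_FALCON macro and the read_channel() routine are invented for illustration.

```c
#include <stdio.h>

/* Build-time switch analogous to the compile/link switches described in
 * the text: the same application code is exercised on the development
 * machine first, then rebuilt for the bare target. */
#ifdef TARGET_FALCON
static int read_channel(int ch)
{
    (void)ch;
    /* On the target this would read the 32-channel ADC through the
     * crate dataway (not shown here). */
    return 0;
}
#else
static int read_channel(int ch)
{
    /* On the development computer the hardware is simulated. */
    return 100 + ch;
}
#endif

int main(void)
{
    int ch;

    for (ch = 0; ch < 4; ch++)
        printf("channel %d = %d\n", ch, read_channel(ch));
    return 0;
}
```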
The Cryogenic Monitoring and Control System
The cryogenic monitoring and control system is the first application of the design philosophy described above. The implementation of this system has allowed us to break the task into three phases which will be described below. Phase I is essentially complete and Phase II is in the early stages of development.

Fig. 2. Terminal interface for the ATLAS cryogenic monitoring and control subsystem.

Fig. 3. ATLAS cryogenic monitoring and control CAMAC crate showing the auxiliary crate controller housing the single board computer.
The ATLAS cryogenic system [2] consists of two separate liquid helium refrigerators (an additional one is planned), an elaborate liquid helium distribution system with a total length of nearly 200 feet, two large helium dewars, and an associated liquid nitrogen distribution system. This system provides the necessary cooling for ATLAS, which consists of 47 superconducting resonators, 22 superconducting solenoids, and a superconducting beam switching dipole magnet. The total number of parameters which should be monitored in the system is more than seventy. These parameters are temperatures, pressures, flow rates, liquid levels, compressor operation, and some miscellaneous information. Presently the system operates in an intensive manual mode. The only automatic feature is the control of heaters, located in the main dewar reservoirs, based on either liquid level, pressure, or flow rate. Unfortunately, the complexity of the system causes this simple regulation to be inadequate under varying load conditions of the accelerator, thereby requiring intensive human interaction for most situations.
The Phase I control system development goals were to implement a stand-alone monitoring system for the ATLAS cryogenic system. The parameters to be interfaced and the basic software required are now in place and have been functioning for three months. A local terminal provides graphical and textual display of system temperatures, pressures, and liquid levels. The terminal keyboard is used to provide input of allowed ranges, to activate limit checks on desired parameters, and to control the displayed information. The terminal is interfaced through one of the two serial ports available on the Falcon. The terminal interface is shown in Figure 2 and the CAMAC crate housing the microprocessor is shown in Figure 3. The system is presently running from RAM memory, but it will soon be converted to a configuration with PROM memory for the program and RAM for the data regions only.
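As a flavor of the keyboard input just described (entering allowed ranges and arming limit checks), here is a minimal C sketch; the real code is Pascal on the Falcon, and the command syntax, array names, and parameter numbering below are invented.

```c
#include <stdio.h>

#define N_PARAMS 70

static float lo[N_PARAMS], hi[N_PARAMS];
static int   check_enabled[N_PARAMS];

/* Handle one line typed at the local terminal, e.g.
 *   "RANGE 12 3.5 4.2"   set the allowed range for parameter 12
 *   "CHECK 12 1"         enable (1) or disable (0) its limit check   */
static void handle_command(const char *line)
{
    int p, on;
    float a, b;

    if (sscanf(line, "RANGE %d %f %f", &p, &a, &b) == 3 && p >= 0 && p < N_PARAMS) {
        lo[p] = a;
        hi[p] = b;
    } else if (sscanf(line, "CHECK %d %d", &p, &on) == 2 && p >= 0 && p < N_PARAMS) {
        check_enabled[p] = on;
    } else {
        printf("?\n");                /* unrecognized command */
    }
}

int main(void)
{
    handle_command("RANGE 12 3.5 4.2");
    handle_command("CHECK 12 1");
    printf("param 12: %.1f..%.1f, check=%d\n", lo[12], hi[12], check_enabled[12]);
    return 0;
}
```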
The goal for Phase II is to provide communication back to the central control computer to allow data logging, global alarm condition broadcasting, and remote site communication. This phase of development will employ the dataway communications link, a CAMAC-DMA device which allows DMA access to the memory of the local microprocessor in order to communicate and to retrieve data concerning the subsystem parameters. For data logging functions, the central computer can extract information from the microprocessor memory without the need for any communication software interface. Similarly, for error condition reporting, the microprocessor can interrupt the central CPU by setting a LAM in the CAMAC crate and loading the error information into a predefined region of memory. Therefore, in Phase II communication the problems of asynchronous communication can be avoided. The need for such communication will surface later, in the Phase III period, when actual automatic control is attempted and remote operator input may become desirable. Phase II implementation should occur during the next six months.
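Seen from the central computer, this Phase II mechanism reduces to periodically copying the SBC's parameter block and checking for a LAM. The C sketch below is only a schematic of that flow under assumed names: read_sbc_memory() and lam_pending() are stand-ins for the dataway communications link transfer and the LAM test, not real driver calls, and the block layout repeats the hypothetical one sketched earlier.

```c
#include <stdint.h>
#include <stdio.h>

#define N_PARAMS 70

/* Layout must match whatever the SBC publishes; this repeats the
 * hypothetical block used in the earlier sketch. */
struct status_block {
    uint16_t heartbeat;
    uint16_t alarm_flags;
    int16_t  value[N_PARAMS];
};

/* Stand-ins for the dataway communications link transfer and the LAM
 * test; neither is a real driver call. */
static void read_sbc_memory(struct status_block *dst)
{
    static struct status_block fake;      /* dummy data for this sketch */
    fake.heartbeat++;
    *dst = fake;
}

static int lam_pending(void)
{
    return 0;                             /* no alarm raised in this sketch */
}

int main(void)
{
    struct status_block snap;
    int cycle;

    for (cycle = 0; cycle < 3; cycle++) {
        read_sbc_memory(&snap);           /* periodic data-logging poll */
        printf("heartbeat %u, first reading %d\n",
               (unsigned)snap.heartbeat, (int)snap.value[0]);

        if (lam_pending()) {
            /* Error report: the SBC set a LAM and left the details in a
             * predefined region of its memory (alarm_flags here). */
            printf("alarm flags: %u\n", (unsigned)snap.alarm_flags);
        }
    }
    return 0;
}
```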
The third phase will be to implement certain control features on the cryogenic system. A more elaborate heater control, allowing regulation based on a number of parameters, and some additional control of specific valves and loading of compressors may also be included. The details of this effort will require intensive implementation and testing cycles which await the full implementation of Phases I and II.
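As one illustration of what "regulation based on a number of parameters" could look like, the sketch below drives a dewar heater from the liquid-level error but cuts it back on a pressure override; the rule, the gain, and the limits are invented for this example and are not the control law planned for ATLAS.

```c
#include <stdio.h>

/* Invented example of multi-parameter heater regulation: boil off excess
 * liquid when the level is above its setpoint, but shed the heat load
 * entirely if the dewar pressure is already too high.  Gains and limits
 * are arbitrary. */
static double heater_power(double level, double level_set,
                           double pressure, double pressure_max)
{
    double gain = 50.0;                   /* % of full power per unit of level error */
    double power;

    if (pressure > pressure_max)
        return 0.0;                       /* pressure override */

    power = gain * (level - level_set);   /* proportional term on liquid level */
    if (power < 0.0)
        power = 0.0;
    if (power > 100.0)
        power = 100.0;                    /* clamp to 0..100 % heater power */
    return power;
}

int main(void)
{
    printf("heater = %.1f %%\n", heater_power(0.82, 0.80, 1.15, 1.30));
    return 0;
}
```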
This research was supported by the U. S. Department of Energy under Contract W-31-109-Eng-38.

References

[1] J. Aron, et al., "Status of the ATLAS Project", this conference.

[2] J. M. Nixon and L. M. Bollinger, "Cooling the Argonne National Laboratory Superconducting Heavy-Ion Linac with Two Refrigerators in Parallel", Advances in Cryogenic Engineering, 27 (1982) 579.