Geogrl Analysis Vol. 19, No. 4, October 1987
PROGRAMMING MODELS FOR FACILITY DISPERSION: THE p-DISPERSION AND MAXISUM DISPERSION PROBLEMS

MICHAEL J. KUBY
Department of Geography, Boston University, Boston, Mass., U.S.A.
Abstract-The p-dispersion problem is to locate p facilities on a network so that the minimum separation distance between any pair of open facilities is maximized. This problem is applicable to facilities that pose a threat to each other and to systems of retail or service franchises. In both of these applications, facilities should be as far away from the closest other facility as possible. A mixed-integer program is formulated that relies on reversing the value of the 0-1 location variables in the distance constraints so that only the distances between pairs of open facilities constrain the maximization. A related problem, the maxisum dispersion problem, which aims to maximize the average separation distance between open facilities, is also formulated and solved. Computational results for both models for locating 5 and 10 facilities on a network of 25 nodes are presented, along with a multicriteria approach combining the dispersion and maxisum problems. The p-dispersion problem has a weak duality relationship with the (p - 1)-center problem in that one-half the maximin distance in the p-dispersion problem is a lower bound for the minimax distance in the center problem for (p - 1) facilities. Since the p-center problem is often solved via a series of set-covering problems, the p-dispersion problem may prove useful for finding a starting distance for the series of covering problems.
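The constraint device described in the abstract (using the complements of the 0-1 location variables so that a big-M term relaxes a pairwise distance constraint whenever either site is closed) can be sketched as a small mixed-integer program. The snippet below is a minimal illustration in Python with the PuLP library; the distance matrix, site set, and variable names are invented for illustration and are not the paper's test problem.

```python
# Sketch of a p-dispersion MIP: maximize the minimum distance D between any
# pair of open facilities. The big-M term relaxes the constraint on D whenever
# either candidate site i or j is closed (x_i = 0 or x_j = 0).
# Hypothetical data; requires the PuLP package (pip install pulp).
import pulp

d = {  # symmetric distances between 4 candidate sites (illustrative)
    (0, 1): 10, (0, 2): 6, (0, 3): 8,
    (1, 2): 7, (1, 3): 12, (2, 3): 9,
}
sites = range(4)
p = 2                      # number of facilities to open
M = max(d.values())        # big-M constant

prob = pulp.LpProblem("p_dispersion", pulp.LpMaximize)
x = {i: pulp.LpVariable(f"x_{i}", cat="Binary") for i in sites}
D = pulp.LpVariable("D", lowBound=0)

prob += D                                  # objective: the minimum separation
prob += pulp.lpSum(x.values()) == p        # open exactly p facilities
for (i, j), dij in d.items():
    # Binding only when both x_i and x_j equal 1; otherwise relaxed by M.
    prob += D <= dij + M * (2 - x[i] - x[j])

prob.solve()
print("maximin distance:", pulp.value(D),
      "open sites:", [i for i in sites if x[i].value() == 1])
```

The maxisum variant mentioned in the same abstract would instead maximize the total (or average) pairwise distance between open facilities, which requires auxiliary pair variables linearizing the products of the location variables.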
Geogrl Analysis Vol. 19, No. 4, October 1987
THE STATISTICAL MODELING OF FLOW DATA WHEN THE POISSON ASSUMPTION IS VIOLATED

RICHARD B. DAVIES
Centre for Applied Statistics, University of Lancaster, Lancaster, England

CLIFFORD M. GUY
Department of Town Planning, UWIST, Cardiff, Wales
Abstract-The Poisson model typically provides a poor fit to flow data, but more complex models are difficult to operationalize, especially with production or attraction constraints. Quasi- and pseudolikelihood approaches retain the attractive computational features of the Poisson model. A shopping model example suggests the latter approach to be preferable.
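As a rough illustration of the quasi-likelihood idea (keeping the Poisson mean structure but estimating a dispersion parameter from the Pearson statistic, so standard errors reflect overdispersed counts), the following sketch fits an overdispersed count model with statsmodels. The data and variable names are synthetic and this is not the authors' shopping-model specification.

```python
# Quasi-Poisson illustration: fit the usual Poisson GLM, then rescale the
# standard errors by a dispersion estimate (Pearson chi-square / d.f.).
# Synthetic overdispersed counts stand in for flow data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
mu = np.exp(0.5 + 0.8 * x)
# Negative binomial draws mimic flows more variable than Poisson.
y = rng.negative_binomial(n=2, p=2.0 / (2.0 + mu))

X = sm.add_constant(x)
poisson_fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
quasi_fit = sm.GLM(y, X, family=sm.families.Poisson()).fit(scale="X2")

print("dispersion estimate:", quasi_fit.scale)   # well above 1 => overdispersion
print("Poisson SEs:       ", poisson_fit.bse)
print("quasi-Poisson SEs: ", quasi_fit.bse)
```

The point estimates are identical under the two fits; only the inferential scale changes, which is what makes the approach computationally as cheap as the plain Poisson model.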
IEEE Trans. Softw. Engng Vol. SE-13, No. 10, October 1987
A COMPREHENSIVE MODEL FOR THE DESIGN OF DISTRIBUTED COMPUTER SYSTEMS

HEMANT K. JAIN
School of Business Administration, University of Wisconsin-Milwaukee, Milwaukee, WI 53201, U.S.A.
Abstract-The availability of micro-, mini- and supercomputers has complicated the laws governing economies of scale in computers. A recent study by Ein-Dor [7] concludes that it is most effective to accomplish any task on the least powerful type of computer capable of performing it. This change in cost/performance, together with the promise of increased reliability, modularity, and better response time, has resulted in an increased tendency to decentralize and distribute computing power. But some economic factors, such as the communication expenses incurred and the increased storage required by distributed systems, work against the tendency to decentralize. It is clear that in many instances the optimal solution will be an integration of computers of varying power.
The problem of finding this optimal integration is complex. The designer of such a system may have conflicting objectives, including low investment and operating cost, quick response to user queries, and high availability of data. Choosing the proper alternatives without computational aid may be difficult if not impossible. This paper addresses the distributed computer system design problem of selecting a proper class of processor for each location and allocating data files/databases. The initial design is based on the type and volume of transactions and the number of files expected in the system. A goal programming approach is presented to help the designer arrive at a good design in this multiobjective environment. The problem is formulated as a nonlinear goal programming problem, and a heuristic based on a modified pattern search approach is used to arrive at a good solution.

Index Terms-Distributed data management, distributed system, file allocation, file availability, goal programming, processing cost, software path length.
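The solution strategy named in the abstract, a heuristic built on a modified pattern search applied to a goal programming formulation, can be illustrated in miniature. The sketch below runs a plain coordinate pattern search on a toy weighted goal-programming objective (weighted deviations from cost, response-time, and availability goals). The goals, weights, and evaluation function are entirely hypothetical and are not the paper's model.

```python
# Toy goal-programming objective minimized by a basic pattern search.
# The two-element design vector and all goal values are illustrative only.
import numpy as np

goals = {"cost": 100.0, "response": 2.0, "availability": 0.99}
weights = {"cost": 1.0, "response": 5.0, "availability": 50.0}

def evaluate(design):
    """Map a design (processor power, replication level) to system metrics."""
    power, replication = design
    cost = 20.0 * power + 30.0 * replication
    response = 10.0 / max(power, 1e-6)
    availability = 1.0 - 0.1 ** max(replication, 0.0)
    return {"cost": cost, "response": response, "availability": availability}

def goal_deviation(design):
    m = evaluate(design)
    # Penalize exceeding the cost/response goals and missing the availability goal.
    return (weights["cost"] * max(m["cost"] - goals["cost"], 0.0)
            + weights["response"] * max(m["response"] - goals["response"], 0.0)
            + weights["availability"] * max(goals["availability"] - m["availability"], 0.0))

def pattern_search(f, x0, step=1.0, shrink=0.5, tol=1e-3, max_iter=500):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        if step <= tol:
            break
        improved = False
        for i in range(len(x)):          # exploratory moves along each coordinate
            for delta in (step, -step):
                trial = x.copy()
                trial[i] += delta
                if f(trial) < f(x):
                    x, improved = trial, True
        if not improved:
            step *= shrink               # no move helped: refine the mesh
    return x

best = pattern_search(goal_deviation, x0=[1.0, 1.0])
print("design:", best, "total weighted deviation:", goal_deviation(best))
```

The weights encode the designer's priorities among the conflicting objectives; changing them traces out different compromise designs, which is the practical appeal of the goal programming framing.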
IEEE Trans. biomed. Engng Vol. BME-34, No. 10, October 1987
CHARACTERIZATION OF THE CORONARY VASCULAR CAPACITANCE, RESISTANCE, AND FLOW IN ENDOCARDIUM AND EPICARDIUM BASED ON A NONLINEAR DYNAMIC ANALOG MODEL

YING SUN
Department of Electrical Engineering, University of Rhode Island, Kingston, RI 02881, U.S.A.

HENRY GEWIRTZ
Department of Medicine, Cardiology Section and Brown University Program in Medicine, Providence, RI 02902, U.S.A.
Abstract-An electrical analog model consisting of capacitors, diodes, and linear and nonlinear resistors was used to characterize the coronary pressure-flow relationships from the arterial side of the coronary circulation. Based on this analog model, an identifiable system was formulated whereby the coronary vascular capacitance and resistance in the endocardial and epicardial layers of the heart were estimated. This was done by solving a constrained least-squares problem using a nonlinear programming technique. Experimental data were obtained from 28 animal studies using swine with an artificially induced coronary stenosis. The analog model showed a very consistent representation of the coronary hemodynamics. The model also generated accurate estimates of the endocardial to epicardial blood flow ratios compared to those independently measured by the radioactive microsphere technique. The model-predicted epicardial capacitance had a mean of 4.2 x 10-' ml/mmHg per 100 g tissue, while the endocardial capacitance was negligible in most cases. The results indicated that, in the stenosed coronary circulation of swine, capacitive flow contributes 20% in root-mean-square value to the total flow activity in the epicardium, while flow in the endocardium is dominated by a resistive, vascular waterfall effect.
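The identification step described here, estimating capacitances and resistances by constrained least squares, can be sketched generically. The example below fits two resistances and a capacitance of a lumped RC analog to a synthetic pressure-flow record using SciPy's bounded nonlinear least squares; the model equation, parameter names, and data are placeholders, not the authors' endocardial/epicardial formulation.

```python
# Generic sketch: fit lumped analog parameters (R1, R2, C) to pressure-flow
# data by bounded nonlinear least squares. Synthetic data, toy model form.
import numpy as np
from scipy.optimize import least_squares

t = np.linspace(0.0, 1.0, 200)
pressure = 80.0 + 20.0 * np.sin(2 * np.pi * t)   # mmHg, synthetic input
dp_dt = np.gradient(pressure, t)

def model_flow(params, p, dpdt):
    r1, r2, c = params
    # Two resistive branches plus a capacitive branch (illustrative form only).
    return p / r1 + p / r2 + c * dpdt

true_params = (2.0, 8.0, 0.004)
flow = model_flow(true_params, pressure, dp_dt) \
       + np.random.default_rng(1).normal(0.0, 0.5, t.size)

def residuals(params):
    return model_flow(params, pressure, dp_dt) - flow

fit = least_squares(
    residuals,
    x0=[1.0, 5.0, 0.01],
    bounds=([0.1, 0.1, 0.0], [50.0, 50.0, 0.1]),  # non-negativity/physiological bounds
)
print("estimated R1, R2, C:", fit.x)
```

The bounds play the role of the physical constraints on the parameters; the same pattern extends to more branches when separate endocardial and epicardial compartments are modeled.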
IEEE Trans. biomed. Engng Vol. BME-34, No. 10, October 1987

AUTOREGRESSIVE MODELING OF SURFACE EMG AND ITS SPECTRUM WITH APPLICATION TO FATIGUE

OMRY PAISS and GIDEON F. INBAR
Department of Electrical Engineering, Technion-Israel Institute of Technology, Haifa, Israel 32000
Abstract-The following is an investigation of the ability of the autoregressive (AR) model to describe the spectrum of the processes underlying the recorded surface EMG. The surface EMG (SEMG) spectrum is influenced by two major factors: one attributed to the motor unit (MU) firing rates and the second, higher-frequency one, to the morphology of the action potentials (AP) traveling along the muscle fiber. In the present paper, SEMG measurements were carried out on the biceps brachii muscle with a fixed surface electrode arrangement under isotonic conditions. Sufficient averaging of 0.5 s segments enabled the identification of the low-frequency peak related to the firing rates of the MUs. An AR model was calculated for the signal, since such a model is appropriate for signals with a peaky spectrum. The AR coefficients, the reflection coefficients, and the poles of the AR model were plotted to track their time dependence. Several criteria were used to choose the model's order. Between 2-7