The need to anticipate the consequences of policy decisions becomes ever more important as the magnitude of the potential consequences grows. The multiplicity of connections between the components of society and the economy makes intuitive assessments extremely unreliable. Agent-based modeling has the potential to be a powerful tool for modeling policy impacts. The direct mapping between agents and elements of society and the economy simplifies the translation of real-world functions into the world of computational assessment. Our modeling initiative is motivated by the desire to facilitate informed public debate on alternative policies for how we, as a nation, provide healthcare to our population. We explore the implications of this motivation for the design and implementation of a model. We discuss the choice of an agent-based modeling approach and contrast it with micro-simulation and system dynamics approaches.
The use of combined imagery from different imaging sensors has the potential to provide significant performance improvements over the use of a single image sensor for beyond-the-fence detection and assessment of intruders. Sensing beyond the fence is very challenging for imagers because of uncertain, dynamic, and harsh environmental conditions. The use of imagery from varying spectral bands can alleviate some of this difficulty by providing stronger truth data that can be combined with truth data from other spectral bands to increase detection capabilities. Image fusion of collocated, aligned sensors covering varying spectral bands [1,2,3] has already been shown to improve the probability of detection and reduce nuisance alarms. The development of new multi-spectral sensing algorithms that incorporate sensors that are not collocated will enable automated sensor-based detection, assessment, localization, and tracking in harsh dynamic environments. This level of image fusion will provide the capability of creating spatial information about the intruders. In turn, the fidelity of sensed activities is increased, resulting in opportunities for greater system intelligence for inferring and interpreting these activities and formulating automated responses. The goal of this work is to develop algorithms that will enable the fusion of multi-spectral data for improved detection of intruders and the creation of spatial information that can be further used in assessment decisions.
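As an illustration of the kind of fusion this work targets, the sketch below combines per-pixel detection confidences from two registered spectral bands with a simple weighted rule. The function, array names, weights, and threshold are illustrative assumptions and are not the algorithms developed in this work.

```python
import numpy as np

def fuse_detection_maps(vis_conf, ir_conf, w_vis=0.5, w_ir=0.5, threshold=0.6):
    """Fuse per-pixel detection confidences from two registered spectral bands.

    vis_conf, ir_conf : 2-D arrays in [0, 1] from visible and infrared detectors.
    Returns a boolean detection mask from the weighted combination.
    """
    fused = w_vis * vis_conf + w_ir * ir_conf   # simple linear opinion pool
    return fused >= threshold

# Example: a pixel detected weakly in each band can still exceed the fused threshold.
vis = np.array([[0.55]])
ir = np.array([[0.70]])
print(fuse_detection_maps(vis, ir))  # [[ True]]
```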
This report documents the results for the FY07 ASC Integrated Codes Level 2 Milestone number 2354. The description of this milestone is: 'Demonstrate level set free surface tracking capabilities in ARIA to simulate the dynamics of the formation and time evolution of a weld pool in laser welding applications for neutron generator production'. The specialized boundary conditions and material properties for the laser welding application were implemented and verified by comparison with existing two-dimensional applications. Analyses of stationary spot welds and traveling line welds were performed, and the accuracy of the three-dimensional (3D) level set algorithm was assessed by comparison with 3D moving-mesh calculations.
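The abstract does not reproduce the formulation, but the textbook level set description of a free surface that such a capability builds on advects a scalar field whose zero contour marks the interface:

```latex
\frac{\partial \phi}{\partial t} + \mathbf{u}\cdot\nabla\phi = 0,
\qquad
\Gamma(t) = \{\, \mathbf{x} : \phi(\mathbf{x},t) = 0 \,\},
```

where phi is the level set function, u the local fluid velocity, and Gamma(t) the weld pool surface. This is the standard form of the method, not necessarily ARIA's exact discretization.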
Constitutive models for computational solid mechanics codes are collected in LAME--the Library of Advanced Materials for Engineering. These models describe complex material behavior and are used in our finite deformation solid mechanics codes. To ensure the correct implementation of these models, regression tests have been created for the constitutive models in LAME. A selection of these tests is documented here. Constitutive models are an important part of any solid mechanics code. If an analysis code is meant to provide accurate results, the constitutive models that describe the material behavior need to be implemented correctly. Ensuring the correct implementation of constitutive models is the goal of a testing procedure that is used with the Library of Advanced Materials for Engineering (LAME) (see [1] and [2]). A test suite for constitutive models serves three purposes. First, the test problems provide the constitutive model developer a means to test the model implementation. This is an activity that is always done by any responsible constitutive model developer. Retaining the test problem in a repository where it can be run periodically is an excellent means of ensuring that the model continues to behave correctly. A second purpose of a test suite for constitutive models is that it gives application code developers confidence that the constitutive models work correctly. This is extremely important since any analyst who uses an application code for an engineering analysis will associate a constitutive model in LAME with the application code, not with LAME itself. Therefore, ensuring the correct implementation of constitutive models is essential for application code teams. A third purpose of a constitutive model test suite is that it provides analysts with example problems they can study to understand the behavior of a specific model. Since the choice of a constitutive model, and the properties used in that model, have an enormous effect on the results of an analysis, providing problems that highlight the behavior of various constitutive models can be of great benefit to the engineer. LAME is currently implemented in the Sierra-based solid mechanics codes Adagio [3] and Presto [4]. The constitutive models in LAME are available in both codes. Because inertial effects appear in the solution of a transient dynamics code such as Presto, it is difficult to test a constitutive model there; the testing of constitutive models is therefore done primarily in Adagio. All of the test problems detailed in this report are run in Adagio. It is the goal of the constitutive model test suite to provide a useful service for the constitutive model developer, the application code developer, and the engineer who uses the application code. Due to the conflicting needs and tight time constraints on solid mechanics code development, no formal requirements exist for implementing test problems for constitutive models. Model developers are strongly encouraged to provide and document test problems, but given the choice between having a model without a test problem and having no model at all, such requirements must be kept loose. A flexible code development environment, especially with regard to research and development in constitutive modeling, is essential to the success of the code effort. This report provides documentation of a number of tests for the constitutive models in LAME. Each section documents a separate test with a brief description of the model, the test problem, and the results.
This report is meant to be updated periodically as more test problems are created and put into the test suite.
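As a schematic of the kind of regression check described above, the sketch below drives a simple isotropic linear elastic model through a uniaxial-strain history and compares the computed axial stress against the closed-form answer. This is a minimal Python illustration under assumed, arbitrary property values; it is not the actual LAME test harness, which runs the models inside Adagio.

```python
import numpy as np

def linear_elastic_uniaxial_strain_stress(strain, E, nu):
    """Axial stress for uniaxial *strain* loading of an isotropic linear elastic
    solid: sigma = (lambda + 2*mu) * eps_axial."""
    lam = E * nu / ((1 + nu) * (1 - 2 * nu))
    mu = E / (2 * (1 + nu))
    return (lam + 2 * mu) * strain

def test_linear_elastic_uniaxial_strain():
    E, nu = 200.0e9, 0.3                        # arbitrary steel-like properties
    strains = np.linspace(0.0, 1.0e-3, 11)      # prescribed strain history
    computed = linear_elastic_uniaxial_strain_stress(strains, E, nu)
    expected = (E * (1 - nu) / ((1 + nu) * (1 - 2 * nu))) * strains
    assert np.allclose(computed, expected, rtol=1.0e-12)

test_linear_elastic_uniaxial_strain()
```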
The Library of Advanced Materials for Engineering (LAME) provides a common repository for constitutive models that can be used in computational solid mechanics codes. A number of models, including both hypoelastic (rate) and hyperelastic (total strain) constitutive forms, have been implemented in LAME. The structure and testing of LAME are described in Scherzinger and Hammerand ([3] and [4]). The purpose of the present report is to describe the material models that have already been implemented in LAME. The descriptions are designed to give useful information to both analysts and code developers. Thus far, 33 non-ITAR/non-CRADA protected material models have been incorporated. These include everything from simple isotropic linear elastic models to a number of elastic-plastic models for metals to models for honeycomb, foams, potting epoxies, and rubber. A complete description of each model is outside the scope of the current report. Rather, the aim here is to delineate the properties, state variables, functions, and methods for each model. However, a brief description of some of the constitutive details is provided for a number of the material models. Where appropriate, the SAND reports available for each model have been cited. Many models have state variable aliases for some or all of their state variables. These alias names can be used for outputting desired quantities. The state variable aliases available for results output have been listed in this report. However, not all models use these aliases. For those models, no state variable names are listed. Nevertheless, the number of state variables employed by each model is always given. Currently, there are four possible methods for a material model, and this report lists which of them are employed in each material model. As far as analysts are concerned, this information is included only for awareness purposes. The analyst can take confidence in the fact that the model has been properly implemented and that the methods necessary for achieving accurate and efficient solutions have been incorporated. The most important method is the getStress function, where the actual material model evaluation takes place. Obviously, all material models incorporate this function. The initialize function is included in most material models. The initialize function is called once at the beginning of an analysis, and its primary purpose is to initialize the material state variables associated with the model. Often there is information that can be set once per load step. For instance, we may have temperature-dependent material properties in an analysis where temperature is prescribed. Instead of setting those parameters at each iteration in a time step, it is much more efficient to set them once per load step at the beginning of the step. These types of load step initializations are performed in the loadStepInit method. The final function used by many models is the pcElasticModuli method, which changes the moduli used by the elastic preconditioner in Adagio. The moduli for the elastic preconditioner are set during the initialization of Adagio. Sometimes better convergence can be achieved by changing these moduli for the elastic preconditioner. For instance, it typically helps to modify the preconditioner when the material model has temperature-dependent moduli. For many material models, it is not necessary to change the values of the moduli that are set initially in the code.
Hence, those models do not have pcElasticModuli functions. All four of these methods receive information from the matParams structure as described by Scherzinger and Hammerand.
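To make the four-method interface concrete, here is a schematic Python sketch of a material model exposing initialize, loadStepInit, getStress, and pcElasticModuli. The real LAME models are C++ classes driven by Adagio and Presto, so the class, property names, and the trivial 1-D update shown here are illustrative assumptions only.

```python
class MaterialModelSketch:
    """Schematic of the four-method interface described above (illustrative only;
    the actual LAME models are C++ classes driven by Adagio/Presto)."""

    def __init__(self, mat_params):
        self.props = mat_params            # stands in for the matParams structure
        self.state = {}

    def initialize(self):
        # Called once at the beginning of an analysis: set up state variables.
        self.state["eqps"] = 0.0           # e.g., equivalent plastic strain
        self.state["stress"] = 0.0

    def loadStepInit(self, temperature):
        # Called once per load step: evaluate temperature-dependent properties
        # here rather than at every iteration within the step.
        self.E = self.props["E_of_T"](temperature)

    def getStress(self, strain_increment):
        # The actual model evaluation; here a trivial 1-D hypoelastic update.
        self.state["stress"] += self.E * strain_increment
        return self.state["stress"]

    def pcElasticModuli(self):
        # Optionally return modified moduli for the elastic preconditioner.
        return self.E
```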
Full coupling of the Calore and Fuego codes is exercised in this report. This is done to allow solution of general conjugate heat transfer applications that require more than a fluid flow analysis with a very simple conduction region (solved using Fuego alone) or more than a complex conduction/radiation analysis with a simple Newton's-law-of-cooling boundary condition (solved using Calore alone). Code coupling allows for solution of both complex fluid and solid regions, with or without thermal radiation, either participating or non-participating. A coupled physics model is developed and compared to data taken from a horizontal concentric-cylinder arrangement using the Penlight heating apparatus located at the Thermal Test Complex (TTC) at Sandia National Laboratories. The experimental setup requires a conjugate heat transfer analysis including conduction, nonparticipating thermal radiation, and internal natural convection. The fluid domain in the model is complex and can be characterized by stagnant fluid regions, laminar circulation, a transition regime, and low-level turbulent regions, all in the same domain. Consequently, the fluid region requires a refined mesh near the wall so that adequate numerical resolution is achieved. Near the wall is also where buoyancy exerts its strongest influence on turbulence (i.e., where turbulent conditions exist). Because low-Reynolds-number effects are important in anisotropic natural convective flows of this type, the ν̄²-f turbulence model in Fuego is selected and compared to laminar-flow-only results. Coupled code predictions are compared to temperature measurements made in both the solid regions and a fluid region. Turbulent and laminar flow predictions are nearly identical for both regions. Predicted temperatures in the solid regions compare well to data. The largest discrepancies occur at the bottom of the annulus. Predicted temperatures in the fluid region, for the most part, compare well to data. As before, the largest discrepancies occur at the bottom of the annulus, where the flow transitions to, or is already, a low-level turbulent flow.
There is a need in security systems to rapidly and accurately grant access of authorized personnel to a secure facility while denying access to unauthorized personnel. In many cases this role is filled by security personnel, which can be very costly. Systems that can perform this role autonomously without sacrificing accuracy or speed of throughput are very appealing. To address the issue of autonomous facility access through the use of technology, the idea of a "secure portal" is introduced. A secure portal is a defined zone where state-of-the-art technology can be implemented to grant secure-area access or to allow special privileges for an individual. Biometric technologies are of interest because they are generally more difficult to defeat than technologies such as badge swipe and keypad entry. The biometric technologies selected for this concept were facial and gait recognition. They were chosen because they require less user cooperation than other biometrics such as fingerprint, iris, and hand geometry, and because they have the most potential for flexibility in deployment. The secure portal concept could be implemented within the boundaries of an entry area to a facility. As a person approaches a badge and/or PIN portal, face and gait information can be gathered and processed. The biometric information could be fused for verification against the information gathered from the badge. This paper discusses a facial recognition technology that was developed for the purpose of providing high verification probabilities with low false alarm rates, as would be required of an autonomous entry control system. In particular, a 3-D facial recognition approach using Fisher Linear Discriminant Analysis is described. Gait recognition technology, based on Hidden Markov Models, has been explored, but those results are not included in this paper. Fusion approaches for combining the results of the biometrics would be the next step in realizing the secure portal concept.
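For orientation, a two-class Fisher linear discriminant can be sketched as below. This generic formulation operates on arbitrary feature vectors and is not the 3-D facial recognition pipeline developed in the work; the feature extraction, multi-class treatment, and threshold choice are assumptions for illustration.

```python
import numpy as np

def fisher_lda_direction(X_a, X_b):
    """Two-class Fisher linear discriminant: w maximizes between-class scatter
    relative to within-class scatter, w = Sw^{-1} (mu_a - mu_b)."""
    mu_a, mu_b = X_a.mean(axis=0), X_b.mean(axis=0)
    Sw = (np.cov(X_a, rowvar=False) * (len(X_a) - 1)
          + np.cov(X_b, rowvar=False) * (len(X_b) - 1))      # within-class scatter
    # Small ridge term keeps the solve well posed for limited training data.
    return np.linalg.solve(Sw + 1e-6 * np.eye(Sw.shape[0]), mu_a - mu_b)

def verify(probe, gallery_template, w, threshold):
    """Accept if the projected distance to the claimed identity's template is small."""
    return abs(w @ probe - w @ gallery_template) < threshold
```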
The sintering behavior of Sandia chem-prep high-field varistor materials was studied using techniques including in situ shrinkage measurements, optical and scanning electron microscopy, and x-ray diffraction. A thorough literature review of phase behavior, sintering, and microstructure in Bi₂O₃-ZnO varistor systems is included. The effects of Bi₂O₃ content (from 0.25 to 0.56 mol%) and of sodium doping level (0 to 600 ppm) on the isothermal densification kinetics were determined between 650 and 825 °C. At ≥750 °C, samples with ≥0.41 mol% Bi₂O₃ have very similar densification kinetics, whereas samples with ≤0.33 mol% begin to densify only after a period of hours at low temperatures. The effect of the sodium content was greatest at ≈700 °C for the standard 0.56 mol% Bi₂O₃ composition and was greater in samples with 0.30 mol% Bi₂O₃ than in those with 0.56 mol%. Sintering experiments on samples of differing size and shape found that densification decreases and mass loss increases with increasing surface-area-to-volume ratio. However, these two effects have different causes: the enhancement in densification as samples increase in size appears to be caused by a low-oxygen internal atmosphere that develops, whereas the mass loss is due to the evaporation of bismuth oxide. In situ XRD experiments showed that the bismuth is initially present as an oxycarbonate that transforms to metastable β-Bi₂O₃ by 400 °C. At ≈650 °C, coincident with the onset of densification, the cubic binary phase Bi₃₈ZnO₅₈ forms and remains stable to >800 °C, indicating that a eutectic liquid does not form during normal varistor sintering (≈730 °C). Finally, the formation and morphology of bismuth oxide phase regions that form on the varistor surfaces during slow cooling were studied.
A new rotary MEMS actuator has been developed and tested at Sandia National Laboratories that utilizes a linear thermal actuator as the drive mechanism. This actuator was designed to be a low-voltage, high-force alternative to the existing electrostatic torsional ratcheting actuator (TRA) [1]. The new actuator, called the Thermal Rotary Actuator (ThRA), is conceptually much simpler than the TRA and consists of a gear on a hub that is turned by a linear thermal actuator [2] positioned outside of the gear. As seen in Figure 1, the gear is turned through a ratcheting pawl, with anti-reverse pawls positioned around the gear to enforce unidirectional motion. A primary consideration in the design of the ThRA was device reliability and, in particular, the required one-to-one relationship between the ratcheting output motion and the electrical input signal. The electrostatic TRA design has been shown to both over-drive and under-drive relative to the number of input pulses [3]. Two different ThRA designs were cycle tested to measure the skip rate. This was done in an automated test setup by using pattern matching to measure the angle of rotation of the output gear after a defined number of actuation pulses. By measuring this gear angle over time, the number of skips can be determined. Figure 2 shows a picture of the ThRA during testing, with the pattern-matching features highlighted. In the first design tested, it was found that creep in the thermal actuator limited the number of skip-free cycles, as the rest position of the actuator would creep forward enough to prevent the counter-rotation pawls from fully engaging (Figure 3). Even with this limitation, devices were measured with up to 100 million cycles with no skipping. A design modification was made to reduce the operating temperature of the thermal actuator, which has been shown in a previous study [2] to reduce the creep rate. In addition, changes were made to the drive ratchet design and actuation direction to increase the available output force. This new design was tested and shown, in one case, to operate out to greater than 360 million cycles without any skipping, after which the test was stopped without failure. The output force was also measured as a function of input voltage (Figure 4) and shown to be higher than that of the previous design. The maximum force shown in the figure is a limit of the gauge used, not of the actuator itself. Continued work for this design will focus on understanding the actuator performance while driving a load, as all current tests were performed with no load on the output gear.
A recent report on criticality accidents in nuclear facilities indicates that human error played a major role in a significant number of incidents with serious consequences and that some of these human errors may be related to the emotional state of the individual. A pre-shift test to detect a deleterious emotional state could reduce the occurrence of such errors in critical operations. Achieving effective pre-shift testing is a challenge because of the need to gather predictive data in a relatively short test period and because of potential learning effects arising from the requirement for frequent testing. This report reviews the different types of reliability and validity methods, as well as the testing and statistical analysis procedures, needed to validate measures of emotional state. The ultimate value of a validation study depends upon the percentage of human errors in critical operations that are due to the emotional state of the individual. A review of the literature to identify the most promising predictors of emotional state for this application is highly recommended.
The present paper explores group dynamics and electronic communication, two components of wicked problem solving that are inherent to the national security environment (as well as many other business environments). First, because there can be no "right" answer or solution without first having agreement about the definition of the problem and the social meaning of a "right solution", these problems (often) fundamentally relate to the social aspects of groups, an area with much empirical research and application still needed. Second, as computer networks have been increasingly used to conduct business with decreased costs, increased information accessibility, and rapid document, database, and message exchange, electronic communication enables a new form of problem solving group that has yet to be well understood, especially as it relates to solving wicked problems.
This report summarizes the major research and development accomplishments for the late-start LDRD project (investment area: Enable Predictive Simulation) entitled 'Atomically Engineering Cu/Ta Interfaces'. The two ultimate goals of the project are: (a) use atomistic simulation to explore important atomistic assembly mechanisms during growth of Cu/Ta multilayers; and (b) develop a non-continuum model that has sufficient fidelity and computational efficiency for use as a design tool. Chapters 2 and 3 are essentially two papers that address these two goals, respectively. In chapter 2, molecular dynamics simulations were used to study the growth of Cu films on (010) bcc Ta and of CuₓTa₁₋ₓ alloy films on (111) fcc Cu. The results indicated that fcc crystalline Cu films with a (111) texture are always formed when Cu is grown on Ta. The Cu films are always polycrystalline, even when the Ta substrate is single crystalline. These polycrystalline films are composed of grains with only two different orientations, which are separated by either orientational grain boundaries or misfit dislocations. Periodic misfit dislocations and stacking fault bands are observed. The Cu film surface roughness was found to decrease with increasing adatom energy. Due to a Cu surface segregation effect, the CuₓTa₁₋ₓ films deposited on Cu always have a higher Cu composition than that used in the vapor mixture. When the Cu and Ta compositions in the films are comparable, amorphous structures may form. The fundamental origins of all these phenomena have been studied in terms of crystallography and interatomic interactions. In chapter 3, a simplified computational method, the diffusional Monte Carlo (dMC) method, was developed to address long-time kinetic processes in materials. Long-time kinetic processes usually involve material transport by diffusion. The corresponding microstructural evolution of materials can be analyzed by kinetic Monte Carlo simulation methods, which essentially simulate structural evolution by tracing each atomic jump. However, if the simulation is carried out at a high temperature, or a jump mechanism with a very low energy barrier is encountered, the jump frequency may approach the atomic vibration frequency, and the computational efficiency of the kinetic Monte Carlo method rapidly decreases to that of a molecular dynamics simulation. The diffusional Monte Carlo method addresses the net effects of many atomic jumps over a finite-duration, kinetically controlled process. First, atom migration due to both random and non-random jumps is discussed. The concept of dMC is then introduced for random-jump diffusion. The validity of the method is demonstrated using several diffusion cases in one-, two-, and three-dimensional spaces, including the dissolution of spinodal structures. The application of the non-random diffusion theory to spinodal decomposition is also demonstrated.
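To illustrate the jump-by-jump bookkeeping that kinetic Monte Carlo performs, and that dMC replaces with the net effect of many jumps per step, a minimal random-jump sketch is shown below. It simply recovers Fickian spreading of atoms on a 1-D lattice; it is an assumed toy example, not the dMC algorithm itself.

```python
import numpy as np

def random_jump_walk(n_atoms, n_jumps, rng):
    """Trace each atomic jump explicitly on a 1-D lattice (the bookkeeping that
    kinetic Monte Carlo does and that dMC aggregates over a finite duration)."""
    steps = rng.choice([-1, 1], size=(n_atoms, n_jumps))  # one jump per column
    return steps.sum(axis=1)                              # net displacement per atom

rng = np.random.default_rng(0)
disp = random_jump_walk(n_atoms=10_000, n_jumps=400, rng=rng)
# Mean-square displacement grows linearly with jump count, i.e. Fickian diffusion.
print(disp.var())   # close to 400
```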
An experiment is proposed which will compare the effectiveness of individual versus group brainstorming in addressing difficult, real world challenges. Previous research into electronic brainstorming has largely been limited to laboratory experiments using small groups of students answering questions irrelevant to an industrial setting. The proposed experiment attempts to extend current findings to real-world employees and organization-relevant challenges. Our employees will brainstorm ideas over the course of several days, echoing the real-world scenario in an industrial setting. The methodology and hypotheses to be tested are presented along with two questions for the experimental brainstorming sessions. One question has been used in prior work and will allow calibration of the new results with existing work. The second question qualifies as a complicated, perhaps even wickedly hard, question, with relevance to modern management practices.
Non-equilibrium sorption of contaminants in groundwater systems is examined from the perspective of sorption rate estimation. A previously developed Markov transition probability model for solute transport is used in conjunction with a new conditional-probability model of the sorption and desorption rates derived from breakthrough curve data. Two models for prediction of spatially varying sorption and desorption rates along a one-dimensional streamline are developed. These models are a Markov model that utilizes conditional probabilities to determine the rates and an ensemble Kalman filter (EnKF) applied to the conditional probability method. Both approaches rely on a previously developed Markov model of mass transfer, and both assimilate the observed concentration data into the rate estimation at each observation time. Initial values of the rates are perturbed from the true values to form ensembles of rates, and the ability of both estimation approaches to recover the true rates is examined over three different sets of perturbations. The models accurately estimate the rates when the mean of the perturbations is zero, the unbiased case. For the cases containing some bias, the addition of the ensemble Kalman filter is shown to improve the accuracy of the rate estimation by as much as an order of magnitude.
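A minimal sketch of the EnKF analysis step used in this kind of rate estimation is given below, assuming a scalar concentration observation and a generic forward map h from a rate vector to a predicted concentration. The function and variable names are illustrative assumptions, not the report's implementation.

```python
import numpy as np

def enkf_update(ensemble, obs, obs_err_var, h, rng):
    """Ensemble Kalman filter analysis step.

    ensemble : (n_members, n_params) array of sorption/desorption rate estimates
    obs      : scalar observed concentration at the current observation time
    h        : callable mapping a rate vector to a predicted concentration
    """
    predicted = np.array([h(m) for m in ensemble])              # forecast observations
    perturbed_obs = obs + rng.normal(0.0, np.sqrt(obs_err_var), len(ensemble))
    cov_xy = np.cov(ensemble.T, predicted)[:-1, -1]             # rate-observation covariance
    var_y = predicted.var(ddof=1) + obs_err_var
    gain = cov_xy / var_y                                       # Kalman gain (one entry per rate)
    return ensemble + np.outer(perturbed_obs - predicted, gain) # updated rate ensemble
```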
Ongoing research at Sandia National Laboratories has been in the area of developing models and simulation methods that can be used to uncover and illuminate the material defects created during He bubble growth in aging bulk metal tritides. Previous efforts have used molecular dynamics calculations to examine the physical mechanisms by which growing He bubbles in a Pd metal lattice create material defects. However, these efforts focused only on the growth of He bubbles in pure Pd and not on bubble growth in the material of interest, palladium tritide (PdT), or its non-radioactive isotopic analog, palladium hydride (PdH). The reason for this is that existing interatomic potentials do not adequately describe the thermodynamics of the Pd-H system, which includes a miscibility gap that leads to phase separation of the dilute (α) and concentrated (β) alloys of H in Pd at room temperature. This document reports the results of research to either find or develop interatomic potentials for the Pd-H and Pd-T systems, including our efforts to use experimental data and density functional theory calculations to create an interatomic potential for this unique metal alloy system.
Discrete models of large, complex systems like national infrastructures and complex logistics frameworks naturally incorporate many modeling uncertainties. Consequently, there is a clear need for optimization techniques that can robustly account for risks associated with modeling uncertainties. This report summarizes the progress of the Late-Start LDRD 'Robust Analysis of Largescale Combinatorial Applications'. This project developed new heuristics for solving robust optimization models, and developed new robust optimization models for describing uncertainty scenarios.
The performance and the reliability of many devices are controlled by interfaces between thin films. In this study we investigated the use of patterned, nanoscale interfacial roughness as a way to increase the apparent interfacial toughness of brittle, thin-film material systems. The experimental portion of the study measured the interfacial toughness of a number of interfaces with nanoscale roughness. This included a silicon interface with a rectangular-toothed pattern of 60-nm wide by 90-nm deep channels fabricated using nanoimprint lithography techniques. Detailed finite element simulations were used to investigate the nature of interfacial crack growth when the interface is patterned. These simulations examined how geometric and material parameter choices affect the apparent toughness. Atomistic simulations were also performed with the aim of identifying possible modifications to the interfacial separation models currently used in nanoscale, finite element fracture analyses. The fundamental nature of atomistic traction separation for mixed mode loadings was investigated.
Remotely-fielded unattended sensor networks generally must operate at very low power--in the milliwatt or microwatt range--and thus have extremely limited communications bandwidth. Such sensors might be asleep most of the time to conserve power, waking only occasionally to transmit a few bits. RFID tags for tracking or material control have similarly tight bandwidth constraints, and emerging nanotechnology devices will be even more limited. Since transmitted data is subject to spoofing, and since sensors might be located in uncontrolled environments vulnerable to physical tampering, the high-consequence data generated by such systems must be protected by cryptographically sound authentication mechanisms; but such mechanisms are often lacking in current sensor networks. One reason for this undesirable situation is that standard authentication methods become impractical or impossible when bandwidth is severely constrained; if messages are small, a standard digital signature or HMAC will be many times larger than the message itself, yet it might be possible to spare only a few extra bits per message for security. Furthermore, the authentication tags themselves are only one part of cryptographic overhead, as key management functions (distributing, changing, and revoking keys) consume still more bandwidth. To address this problem, we have developed algorithms that provide secure authentication while adding very little communication overhead. Such techniques will make it possible to add strong cryptographic guarantees of data integrity to a much wider range of systems.
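For context on the overhead problem, the sketch below shows the conventional approach of truncating a standard HMAC tag. It is not the low-overhead algorithm developed in this work, and aggressive truncation of a per-message tag weakens forgery resistance accordingly; it only illustrates the bandwidth-versus-security tradeoff described above.

```python
import hmac, hashlib

def make_tag(key: bytes, message: bytes, tag_bytes: int = 4) -> bytes:
    """Standard HMAC-SHA-256 truncated to `tag_bytes` bytes.
    A full 32-byte tag can dwarf a few-bit sensor report; truncating trades
    forgery resistance (~2^(8*tag_bytes) guesses) for bandwidth."""
    return hmac.new(key, message, hashlib.sha256).digest()[:tag_bytes]

def verify_tag(key: bytes, message: bytes, tag: bytes) -> bool:
    # Constant-time comparison to avoid leaking tag bytes through timing.
    return hmac.compare_digest(make_tag(key, message, len(tag)), tag)

key = b"shared-sensor-key"        # hypothetical pre-shared key
report = b"\x01"                  # a one-byte sensor event
tag = make_tag(key, report)       # 4 bytes of overhead instead of 32
assert verify_tag(key, report, tag)
```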
DBTools comprises a suite of applications for manipulating data in a database. While loading data into a database is a relatively simple operation, loading data intelligently is deceptively difficult. Loading data intelligently means not duplicating information already in the database, associating new information with related information already in the database, and maintaining a mapping of identification numbers in the input data to existing or new identification numbers in the database to prevent conflicts between the input data and the existing data. Most DBTools applications utilize DBUtilLib, a Java library with functionality supporting database, flat-file, and XML data formats. DBUtilLib is written in a completely generic manner. No schema-specific information is embedded within the code; all such information comes from external sources. This approach makes the DBTools applications immune to most schema changes, such as the addition or deletion of columns from a table or changes to the size of a particular data element.
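A minimal sketch of the identification-number remapping idea described above is given below. The function, arguments, and the notion of a "natural key" are hypothetical Python stand-ins for illustration, not the DBUtilLib API.

```python
def remap_ids(input_rows, existing_by_natural_key, next_id):
    """Map incoming identification numbers onto the database's ID space.

    input_rows              : iterable of (input_id, natural_key, payload) tuples
    existing_by_natural_key : dict mapping a record's natural key to its existing DB ID
    next_id                 : first unused database ID

    Returns (id_map, new_rows): records already present are reused (no duplication),
    and unseen records receive fresh IDs that cannot collide with existing ones.
    """
    id_map, new_rows = {}, []
    for input_id, key, payload in input_rows:
        if key in existing_by_natural_key:
            id_map[input_id] = existing_by_natural_key[key]   # associate, don't duplicate
        else:
            id_map[input_id] = next_id                        # assign a conflict-free new ID
            new_rows.append((next_id, key, payload))
            next_id += 1
    return id_map, new_rows
```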
The 9/30/2007 ASC Level 2 Post-Processing V&V Milestone (Milestone 2360) delivers functionality required by the user community for certain verification and validation tasks. These capabilities include loading of edge and face data on an Exodus mesh, run-time computation of an exact solution to a verification problem, delivery of results data from the server to the client, computation of an integral-based error metric, simultaneous loading of simulation and test data, and comparison of that data using visual and quantitative methods. The capabilities were tested extensively by performing a typical ALEGRA HEDP verification task. In addition, a number of stretch criteria were met, including completion of a verification task on a 13-million-element mesh.
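The milestone description does not specify the metric, but a common integral-based error metric of the kind referenced is the L2 norm of the difference between the simulation field and the run-time exact solution, evaluated by quadrature over the mesh:

```latex
\| e \|_{L_2(\Omega)}
= \left( \int_{\Omega} \bigl( u_h - u_{\mathrm{exact}} \bigr)^2 \, d\Omega \right)^{1/2}
\approx \left( \sum_{e=1}^{N_{\mathrm{elem}}} \sum_{q} w_q \, |J_q| \,
        \bigl( u_h(\mathbf{x}_q) - u_{\mathrm{exact}}(\mathbf{x}_q) \bigr)^2 \right)^{1/2},
```

where u_h is the computed field, u_exact the exact solution, and the inner sum runs over element quadrature points with weights w_q and Jacobian determinants J_q. This is offered as a representative form, not the exact metric implemented for the milestone.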
In 2005, over 33% of all vehicle thefts reported in the United States occurred in the four southwestern border states of California, Arizona, New Mexico, and Texas, all of which have very high vehicle theft rates in comparison to the national average. This report describes the utilization of 'bait vehicles' and associated technologies in the context of motor vehicle theft along the southwest border of the U.S. More than 100 bait vehicles are estimated to be in use by individual agencies and auto theft task forces in the southwestern border states. The communications, tracking, mapping, and remote control technologies associated with bait vehicles provide law enforcement with an effective tool for obtaining arrests in vehicle theft 'hot spots'. Recorded audio and video from inside the vehicle expedite judicial proceedings, as offenders rarely contest the evidence presented. At the same time, law enforcement is very interested in upgrading bait vehicle technology through the use of live streaming video for enhanced officer safety and improved situational awareness. Bait vehicle effectiveness could be enhanced by dynamic analysis of motor vehicle theft trends through exploitation of geospatial, timeline, and other analytical tools to better inform very near-term operational decisions, including the selection of particular vehicle types. This 'information-led' capability would especially benefit from more precise and timely information on the location of vehicles stolen in the United States and found in Mexico. Introducing Automated License Plate Reading (ALPR) technology to collect information associated with stolen motor vehicles driven into Mexico could enhance bait vehicle effectiveness.
Downhole sonar surveys from the four active U.S. Strategic Petroleum Reserve sites have been modeled and used to generate a four-volume sonar atlas showing the three-dimensional geometry of each cavern. This volume, Volume 4, focuses on the West Hackberry SPR site, located in southwestern Louisiana. Volumes 1, 2, and 3, respectively, present images for the Bayou Choctaw SPR site, Louisiana; the Big Hill SPR site, Texas; and the Bryan Mound SPR site, Texas. The atlas uses a consistent presentation format throughout. The basic geometric measurements provided by the down-cavern surveys have also been used to generate a number of geometric attributes, the values of which have been mapped onto the geometric form of each cavern using a color-shading scheme. The intent of the various geometric attributes is to highlight deviations of the cavern shape from the idealized cylindrical form of a carefully leached underground storage cavern in salt. The atlas format does not allow interpretation of such geometric deviations and anomalies. However, significant geometric anomalies, not directly related to the leaching history of the cavern, may provide insight into the internal structure of the relevant salt dome.
For applications such as force protection, an effective decision maker needs to maintain an unambiguous grasp of the environment. Opportunities exist to leverage computational mechanisms for the adaptive fusion of diverse information sources. The current research employs neural networks and Markov chains to process information from sources including sensors, weather data, and law enforcement. Furthermore, the system operator's input is used as a point of reference for the machine learning algorithms. More detailed features of the approach are provided, along with an example force protection scenario.
Sandia National Laboratories (SNL) is conducting pilot-scale evaluations of the performance and cost of innovative water treatment technologies aimed at meeting the recently revised arsenic maximum contaminant level (MCL) for drinking water. The standard of 10 µg/L (10 ppb) is effective as of January 2006. The pilot tests have been conducted in New Mexico, where over 90 sites that exceed the new MCL have been identified by the New Mexico Environment Department. The pilot test described in this report was conducted in Anthony, New Mexico between August 2005 and December 2006 at Desert Sands Mutual Domestic Water Consumers Association (MDWCA) (Desert Sands) Well No. 3. The pilot demonstrations are a part of the Arsenic Water Technology Partnership program, a partnership between the American Water Works Association Research Foundation (AwwaRF), SNL, and WERC (A Consortium for Environmental Education and Technology Development). The Sandia National Laboratories pilot demonstration at the Desert Sands site obtained arsenic removal performance data for fourteen different adsorptive media under intermittent flow conditions. Well water at Desert Sands has approximately 20 ppb arsenic in the unoxidized (arsenite, As(III)) redox state, with moderately high total dissolved solids (TDS), mainly due to high sulfate, chloride, and varying concentrations of iron. The water is slightly alkaline, with a pH near 8. The study provides estimates of the capacity (bed volumes until breakthrough at 10 ppb arsenic) of adsorptive media in the same chlorinated water. Adsorptive media were compared side by side in ambient-pH water with intermittent flow operation. The pilot was divided into four phases that occurred sequentially; however, the phases overlapped in most cases.
Downhole sonar surveys from the four active U.S. Strategic Petroleum Reserve sites have been modeled and used to generate a four-volume sonar atlas showing the three-dimensional geometry of each cavern. This volume, Volume 3, focuses on the Bryan Mound SPR site, located in southeastern Texas. Volumes 1, 2, and 4, respectively, present images for the Bayou Choctaw SPR site, Louisiana; the Big Hill SPR site, Texas; and the West Hackberry SPR site, Louisiana. The atlas uses a consistent presentation format throughout. The basic geometric measurements provided by the down-cavern surveys have also been used to generate a number of geometric attributes, the values of which have been mapped onto the geometric form of each cavern using a color-shading scheme. The intent of the various geometric attributes is to highlight deviations of the cavern shape from the idealized cylindrical form of a carefully leached underground storage cavern in salt. The atlas format does not allow interpretation of such geometric deviations and anomalies. However, significant geometric anomalies, not directly related to the leaching history of the cavern, may provide insight into the internal structure of the relevant salt dome.
The reliability of thin film systems is important to the continued development of microelectronic and micro-electro-mechanical systems (MEMS). The reliability of these systems is often tied to the ability of the films to remain adhered to their substrates. By measuring the amount of energy required to separate the film from the substrate, researchers can predict film lifetimes. Recent work has resulted in several different testing techniques to measure this energy, including spontaneous buckling, indentation-induced delamination, and four-point bending. This report focuses on developing quantifiable adhesion measurements for multiple thin film systems used in MEMS and other thin film systems of interest to Sandia programs. First, methods of accurately assessing interfacial toughness using stressed-overlayer methods are demonstrated using both the W/Si and Au/Si systems. For systems where fracture occurs only along the interface, such as Au/Si, the calculated fracture energies between different tests are identical if the energy put into the system is kept near the strain energy needed to cause delamination. When the energy in the system is greater than that needed to cause delamination, calculated adhesion energies can increase by a factor of three due to plastic deformation. Dependence of the calculated adhesion energies on the applied energy in the system was also shown when comparisons of four-point bending and stressed-overlayer test methods were completed on Pt/Si systems. The fracture energies of Pt/Ti/SiO₂ were studied using four-point bending and compressive overlayers. Varying the thickness of the Ti film from 2 to 17 nm in a Pt/Ti/SiO₂ system, both test methods showed an increase in adhesion energy until the nominal Ti thickness reached 12 nm, after which the adhesion energy began to decrease. While the trends in toughness are similar, the magnitudes of the toughness values measured by the two test methods are not the same, demonstrating the difficulty in extracting mode I toughness as mixed-mode loading approaches mode II conditions.
Robust and reliable quantitative proliferation assessment tools have the potential to contribute significantly to a strengthened nonproliferation regime and to the future deployment of nuclear fuel cycle technologies. Efforts to quantify proliferation resistance have thus far met with limited success due to the inherent subjectivity of the problem and interdependencies between attributes that lead to proliferation resistance. We suggest that these limitations flow substantially from weaknesses in the foundations of existing methodologies--the initial data inputs. In most existing methodologies, little consideration has been given to the utilization of varying types of inputs--particularly the mixing of subjective and objective data--or to identifying, understanding, and untangling relationships and dependencies between inputs. To address these concerns, a model set of inputs is suggested that could potentially be employed in multiple approaches. We present an input classification scheme and the initial results of testing for relationships between these inputs. We will discuss how classifying and testing the relationship between these inputs can help strengthen tools to assess the proliferation risk of nuclear fuel cycle processes, systems, and facilities.
An experiment was conducted comparing the effectiveness of individual versus group electronic brainstorming in order to address difficult, real-world challenges. While industrial reliance on electronic communications has become ubiquitous, empirical and theoretical understanding of the bounds of its effectiveness has been limited. Previous research using short-term laboratory experiments has engaged small groups of students in answering questions irrelevant to an industrial setting. The current experiment extends current findings beyond the laboratory to larger groups of real-world employees addressing organization-relevant challenges over the course of four days. Findings are twofold. First, the data demonstrate that (for this design) individuals perform at least as well as groups in the quantity of electronic ideas produced, regardless of brainstorming duration. However, when judged with respect to quality along three dimensions (originality, feasibility, and effectiveness), the individuals significantly (p<0.05) outperformed the group working together. The theoretical and applied (e.g., cost effectiveness) implications of this finding are discussed. Second, the current experiment yielded several viable solutions to the wickedly difficult problem that was posed.