Recent strategies for algae-based biofuels have primarily focused on biodiesel production by exploiting high algal lipid yields under nutrient stress conditions. However, under conditions supporting robust algal biomass accumulation, carbohydrates and proteins typically comprise up to ~80% of the ash-free dry weight of algal biomass. Therefore, comprehensive utilization of algal biomass for production of multipurpose intermediate- to high-value bio-based products will promote scale-up of algae production and processing to commodity volumes. Terpenes are hydrocarbon and hydrocarbon-like (C:O > 10:1) compounds with high energy density, and are therefore potentially promising candidates for the next generation of value-added bio-based chemicals and “drop-in” replacements for petroleum-based fuels. In this study, we demonstrated the feasibility of bioconverting proteins into sesquiterpene compounds, as well as the comprehensive bioconversion of algal carbohydrates and proteins into biofuels. To achieve this, the mevalonate pathway was reconstructed in an E. coli chassis with six different terpene synthases (TSs). Strains containing the various TSs produced a spectrum of sesquiterpene compounds in minimal medium containing amino acids as the sole carbon source. Sesquiterpene production was optimized through three different regulation strategies, using chamigrene synthase as an example. The highest total terpene titer reached 166 mg/L, and was achieved by applying a strategy to minimize mevalonate accumulation in vivo. The highest yields of total terpene were produced under reduced IPTG induction levels (0.25 mM), reduced induction temperature (25°C), and elevated substrate concentration (20 g/L amino acid mixture). A synthetic bioconversion consortium consisting of two engineered E. coli strains (DH1-TS and YH40-TS) with reconstructed terpene biosynthetic pathways was designed for comprehensive single-pot conversion of algal carbohydrates and proteins to sesquiterpenes. The consortium yielded the highest total terpene titer (187 mg/L) at a 2:1 inoculum ratio of strain YH40-TS to strain DH1-TS, corresponding to 31 mg fuel/g of algal biomass ash-free dry weight. This study therefore demonstrates a feasible process for comprehensive algal biofuel production.
The goal of this SAND report is to provide guidance for other groups hosting workshops and peer-to-peer learning events at Sandia. To that end, this report describes our team structure, how we brainstormed workshop topics, and how we developed the workshop structure. A Workshop “Nuts and Bolts” section provides our timeline and checklist for workshop activities. The survey section provides examples of the questions we asked and how we adapted the workshop in response to the feedback.
The recent Executive Order creating the National Strategic Computing Initiative (NSCI) recognizes the value of high-performance computing for economic competitiveness and scientific discovery and commits to accelerating the delivery of exascale computing. The HPC programs at Sandia (the NNSA ASC program and Sandia's Institutional HPC Program) are focused on ensuring that Sandia has the resources necessary to deliver computation in the national interest.
This report discusses “Ion-Selective Ceramics for Waste Separations,” a project that aims to develop an electrochemical approach to removing fission product waste (e.g., Cs+) from the LiCl-KCl molten salts used in the pyroprocessing of spent nuclear fuel.
The goal of this project is to generate 3D microstructural data by destructive and non-destructive means and to provide accompanying characterization and quantitative analysis of such data. This work is a continuing part of a larger effort to relate material performance variability to microstructural variability, called “Predicting Performance Margins” or PPM. In conjunction with that overarching initiative, the RoboMET.3D™ is a specific asset of Center 1800: an automated serial-sectioning system for destructive analysis of microstructure that provides direct customer support to 1800 and non-1800 customers. To that end, data collection, 3D reconstruction, and analysis of typical and atypical microstructures have been pursued for the purposes of qualitative and quantitative characterization, with a goal of linking microstructural defects and/or microstructural features with mechanical response. Material systems examined in FY15 include precipitation-hardened 17-4 steel, laser welds of 304L stainless steel, thermal spray coatings of 304L, and geological samples of sandstone.
The Kolsky compression bar, or split Hopkinson pressure bar (SHPB), is an experimental apparatus used to obtain the stress-strain response of material specimens at strain rates on the order of 10^2 to 10^4 s^-1. Its operation and associated data reduction are based on principles of one-dimensional wave propagation in rods. Second-order effects, such as indentation of the bars by the specimen and wave dispersion in the bars, can nevertheless significantly affect aspects of the measured material response. Finite element models of the experimental apparatus were used here to demonstrate these two effects. A procedure proposed by Safa and Gary (2010) to account for bar indentation was also evaluated and shown to significantly improve the estimation of the strain in the bars. The use of pulse shapers was also shown to alleviate the effects of wave dispersion. Combining the two can lead to more reliable results in Kolsky compression bar testing.
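The one-dimensional data reduction referred to above rests on the classical one-wave relations: specimen stress follows the transmitted bar strain, and specimen strain rate follows the reflected bar strain. The sketch below illustrates those relations only; the function name and all bar/specimen numbers are hypothetical, not values from this report.

```python
def kolsky_reduce(eps_r, eps_t, dt, E_bar, A_bar, A_spec, L_spec, c_bar):
    """Classical one-wave Kolsky-bar reduction.

    eps_r, eps_t : reflected and transmitted bar strain histories
    Stress:      sigma_s = E_bar * (A_bar / A_spec) * eps_t
    Strain rate: deps/dt = -2 * c_bar * eps_r / L_spec
    Strain:      time integral of the strain rate (rectangular rule).
    """
    stress = [E_bar * A_bar / A_spec * e for e in eps_t]
    rate = [-2.0 * c_bar / L_spec * e for e in eps_r]
    strain, s = [], 0.0
    for r in rate:
        s += r * dt  # simple rectangular integration of the strain rate
        strain.append(s)
    return stress, rate, strain

# Illustrative (hypothetical) constant pulses sampled every microsecond.
eps_r = [-0.001] * 100   # reflected strain pulse
eps_t = [0.0005] * 100   # transmitted strain pulse
stress, rate, strain = kolsky_reduce(eps_r, eps_t, dt=1e-6,
                                     E_bar=200e9, A_bar=2e-4,
                                     A_spec=1e-4, L_spec=0.01, c_bar=5000.0)
```

With these inputs the specimen sees a constant stress of 200 MPa and a strain rate of 1000 s^-1, squarely in the regime the abstract cites.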
A new wind turbine blade has been designed for the National Rotor Testbed (NRT) project and for future experiments at the Scaled Wind Farm Technology (SWiFT) facility, with a specific focus on scaled wakes. This report presents the aerodynamic design of new blades that can produce a wake with similitude to utility-scale blades despite the difference in size and location in the atmospheric boundary layer. The dimensionless quantities of circulation, induction, thrust coefficient, and tip-speed ratio were kept equal between rotor scales in Region 2 of operation. The new NRT design matched the aerodynamic quantities of the most common wind turbine in the United States, the GE 1.5sle turbine with 37c model blades. The NRT blade design is presented along with its performance subject to the winds at SWiFT. The design requirements determined by the SWiFT experimental test campaign are shown to be met.
The application of peridynamics for engineering analysis requires an efficient and robust software implementation. Key elements include processing of the discretization, the proximity search for identification of pairwise interactions, evaluation of the constitutive model, application of a bond-damage law, and contact modeling. Additional requirements may arise from the choice of time integration scheme, for example estimation of the maximum stable time step for explicit schemes, and construction of the tangent stiffness matrix for many implicit approaches. This report summarizes progress to date on the software implementation of the peridynamic theory of solid mechanics. Discussion is focused on the parallel implementation of the meshfree discretization scheme of Silling and Askari [33] in three dimensions, although much of the discussion applies to computational peridynamics in general.
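The proximity search mentioned above identifies, for each node, every other node within the peridynamic horizon. One standard serial technique for this step (a cell list, or spatial binning, search; not necessarily the approach used in the report's parallel code) can be sketched as follows. All names here are hypothetical.

```python
from collections import defaultdict
from itertools import product
import math

def find_bonds(points, horizon):
    """Cell-list proximity search: bucket nodes into cubic cells of edge
    `horizon`, then test only pairs in the same or adjacent cells.
    Returns bond pairs (i, j) with i < j and |x_i - x_j| <= horizon."""
    cells = defaultdict(list)
    for idx, p in enumerate(points):
        key = tuple(int(math.floor(c / horizon)) for c in p)
        cells[key].append(idx)
    bonds = []
    for key, members in cells.items():
        # Visit this cell and its 26 neighbours (3^3 offsets in 3D).
        for offset in product((-1, 0, 1), repeat=len(key)):
            nkey = tuple(k + o for k, o in zip(key, offset))
            for i in members:
                for j in cells.get(nkey, ()):
                    if i < j and math.dist(points[i], points[j]) <= horizon:
                        bonds.append((i, j))
    return sorted(bonds)

# Three collinear nodes spaced 1.0 apart with horizon 1.5:
# only nearest neighbours bond.
pts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
bonds = find_bonds(pts, 1.5)
```

The cell list reduces the naive O(N^2) all-pairs test to a near-linear cost for quasi-uniform discretizations, which matters at the node counts typical of three-dimensional peridynamic models.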
This initial work attempted to determine the feasibility of using advanced in-situ, electron tomography, and precession electron diffraction techniques to determine, with nanometer resolution, the structural evolution that occurs during advanced aging of Pd films. To date, significant progress has been made in studying the cavity structures in sputtered, evaporated, and pulsed-laser-deposited Pd films that result both from the deposition parameters and from He ion implantation. In addition, preliminary work has been done to determine the feasibility of performing precession electron diffraction (PED) and electron tomography in these types of systems. Significant future work is needed to determine the proper conditions such that relevant advanced aging protocols can be developed.
While the increased use of Commercial Off-The-Shelf information technology equipment has presented opportunities for improved cost effectiveness and flexibility, the corresponding loss of control over the product's development creates unique vulnerabilities and security concerns. Of particular interest is the possibility of a supply chain attack. A comprehensive model for the lifecycle of hardware and software products is proposed based on a survey of existing literature from academic, government, and industry sources. Seven major lifecycle stages are identified and defined: (1) Requirements, (2) Design, (3) Manufacturing for hardware and Development for software, (4) Testing, (5) Distribution, (6) Use and Maintenance, and (7) Disposal. The model is then applied to examine the risk of attacks at various stages of the lifecycle.
There has been a long history of considering Safety, Security, and Safeguards (3S) as three functions of nuclear security design and operations that need to be properly and collectively integrated with operations. This paper specifically considers how safety programmes can be extended directly to benefit security as part of an integrated facility management programme. The discussion will draw on experiences implementing such a programme at Sandia National Laboratories’ Annular Research Reactor Facility. While the paper focuses on nuclear facilities, similar ideas could be used to support security programmes at other types of high-consequence facilities and transportation activities.
Recent advances in sensor technology have made continuous real-time health monitoring available in both hospital and non-hospital settings. Because high-frequency medical sensors produce very large volumes of data, storing and processing continuous medical data is an emerging big-data problem. In particular, detecting anomalies in real time is important for detecting and preventing patient emergencies. A time series discord is the subsequence that differs most from the rest of the subsequences in the time series, meaning that it exhibits abnormal or unusual data trends. In this study, we implemented two versions of time series discord detection algorithms on a high-performance parallel database management system (DBMS) and applied them to 240 Hz waveform data collected from 9,723 patients. The initial brute-force version of the algorithm takes each possible subsequence and calculates the distance to its nearest non-self match to find the largest discords in the time series. For the heuristic version, a combination of an array and a trie structure was used to order the time series data for better time efficiency. The results showed efficient data loading, decoding, and discord searches over a large amount of data, benefiting from the time series discord detection algorithm and the architectural characteristics of the parallel DBMS, including data compression, data pipelining, and task scheduling.
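The brute-force discord search described above can be stated compactly: for each subsequence, find the Euclidean distance to its nearest non-self match (a non-overlapping subsequence), and report the subsequence for which that distance is largest. A minimal sequential sketch follows; the in-DBMS parallel and trie-accelerated versions from the study are not reproduced here, and the function name and toy data are illustrative only.

```python
import math

def brute_force_discord(series, m):
    """Find the top time-series discord of length m.

    For each window, compute the Euclidean distance to its nearest
    non-self match (any window that does not overlap it); the discord
    is the window whose nearest-match distance is largest.
    """
    n = len(series) - m + 1
    best_pos, best_dist = -1, -1.0
    for i in range(n):
        nearest = math.inf
        for j in range(n):
            if abs(i - j) < m:  # skip self-matches (overlapping windows)
                continue
            d = math.dist(series[i:i + m], series[j:j + m])
            nearest = min(nearest, d)
        if nearest > best_dist:
            best_dist, best_pos = nearest, i
    return best_pos, best_dist

# A flat series with one spike: the discord window should cover the spike.
data = [0.0] * 20 + [5.0] + [0.0] * 20
pos, dist = brute_force_discord(data, 4)
```

This version is O(n^2) in the number of windows, which is exactly why the study's heuristic ordering and the parallel DBMS architecture matter at 240 Hz over thousands of patients.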
On 6/26/2015 at 1445 in 894/136, a thermal battery (approximately the size of a commercial C-size cell) experienced an unexpected failure following a routine test in which the battery is activated. The failure occurred while a test operator was transferring the battery from the primary containment box used for testing to another containment box within the same room; initial indications are that the battery package went into thermal runaway and ruptured, and the pressure of the expulsion bruised the palm of the hand in which the operator was holding the battery. The operator was wearing the prescribed PPE: safety glasses and a high-temperature glove on the hand that was holding the battery.
Sandia journal manuscript; Not yet accepted for publication
Tarquini, Vinicio; Knighton, Talbot; Wu, Zhe; Huang, Jian; Pfeiffer, Loren; West, Ken; Reno, John L.
Quantum Hall measurements have been performed on high-mobility GaAs/AlGaAs (p-type) and (n-type) quantum wells using a Hall/anti-Hall bar configuration having both inner and outer edges. The potential distribution and the current flow in the bulk can be controlled by the external magnetic field or the driving current. Extreme situations occur at the quantum Hall states, where the current, driven by leads connected to the outer edge, flows exclusively in one half of the sample. In these states, the chemical potential of the inner edge aligns itself with the edge at ground potential. Accumulation and depletion of carriers take place at the edge whose carriers flow away from the current source.
Curtis, Jeremy A.; Tokumoto, Takahisa; Cherian, Judy G.; Kuno, J.; Reno, John L.; Mcgill, Stephen A.; Karaiskaj, Denis; Hilton, David J.
We have studied the cyclotron mobility of a Landau-quantized two-dimensional electron gas as a function of temperature (0.4-100 K) at a fixed magnetic field (1.25 T) using terahertz time-domain spectroscopy in a sample with a low-frequency mobility of μ_dc = 3.6 × 10^6 cm^2 V^-1 s^-1 and a carrier concentration of n_s = 2 × 10^11 cm^-2. The low-temperature mobility in this sample results from both impurity scattering and acoustic deformation potential scattering, with μ_CR^-1 ≈ (2.1 × 10^5 cm^2 V^-1 s^-1)^-1 + (3.8 × 10^-8 V s K^-1 cm^-2) × T at low temperatures. Above 50 K, the cyclotron oscillations show a strong reduction in both the oscillation amplitude and lifetime that is dominated by the contribution of polar optical phonons. These results suggest that electron dephasing times as long as ~300 ps are possible even at this high filling factor (ν = 6.6) in higher-mobility samples (> 10^7 cm^2 V^-1 s^-1) that have lower impurity concentrations and where the cyclotron mobility at this carrier concentration would be limited by acoustic deformation potential scattering.
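Read as a Matthiessen sum, the fitted low-temperature form above combines a temperature-independent impurity term with an acoustic-phonon term whose inverse mobility grows linearly in T. The sketch below simply evaluates that fitted form numerically, assuming the two coefficients quoted in the abstract; the function name is hypothetical.

```python
def cyclotron_mobility(T):
    """Matthiessen-style combination of the two fitted scattering terms:
    impurity-limited mobility 2.1e5 cm^2 V^-1 s^-1, plus an acoustic
    deformation potential term 3.8e-8 V s K^-1 cm^-2 * T (T in kelvin).
    Returns mobility in cm^2 V^-1 s^-1."""
    inv_mu = 1.0 / 2.1e5 + 3.8e-8 * T
    return 1.0 / inv_mu

mu_base = cyclotron_mobility(0.4)   # near the impurity-limited value
mu_warm = cyclotron_mobility(50.0)  # phonon term now significant
```

At the lowest measured temperature the mobility sits just below the impurity-limited value, and it falls monotonically as the linear-in-T phonon term grows, consistent with the trend the abstract describes below 50 K.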
Relative motion at bolted connections can occur under large shock loads when the internal shear force in the connection overcomes the frictional resistive force. This macroslip dissipates energy and reduces the response of the components above the bolted connection. There is a need to capture macroslip behavior in structural dynamics models; a linear model, and many nonlinear models, cannot predict macroslip effectively. The proposed method is to use the multi-body dynamics code ADAMS to model joints with 3-D contact at the bolted interfaces. This model includes both static and dynamic friction. The joints are preloaded, and the pinning effect that occurs when a bolt shank impacts the inside diameter of a through hole is captured. Substructure representations of the components are included to account for component flexibility and dynamics. The method was applied to a simplified model of an aerospace structure, and validation experiments were performed to test its adequacy.
The measurement of the radiation characteristics of an antenna on a near-field range requires that the antenna under test be located very close to the near-field probe. Although the direct coupling is utilized for characterizing the near field, this close proximity also presents the opportunity for significant undesired interactions (for example, reflections) to occur between the antenna and the near-field probe. When uncompensated, these additional interactions will introduce error into the measurement, increasing the uncertainty in the final gain pattern obtained through the near-field-to-far-field transformation. Quantifying this gain-uncertainty contribution requires quantifying the various additional interactions. A method incorporating spatial-frequency analysis is described which allows the dominant interaction contributions to be easily identified and quantified. In addition to identifying the additional antenna-to-probe interactions, the method also allows identification and quantification of interactions with other nearby objects within the measurement room. Because the method is a spatial-frequency method, wide-bandwidth data is not required, and it can be applied even when data is available at only a single temporal frequency. This feature ensures that the method can be applied to narrow-band antennas, where a similar time-domain analysis would not be possible.
Major exascale computing reports identify a number of software challenges posed by the dramatic change in system architectures expected in the near future. While a several-orders-of-magnitude increase in parallelism is the most commonly cited of these, hurdles also include performance heterogeneity of compute nodes across the system, increased imbalance between computational capacity and I/O capabilities, frequent system interrupts, and complex hardware architectures. Asynchronous task-parallel programming models show great promise in addressing these issues, but are not yet fully understood nor sufficiently developed for computational science and engineering application codes. We address these knowledge gaps through quantitative and qualitative exploration of leading candidate solutions in the context of engineering applications at Sandia. In this poster, we evaluate the MiniAero code ported to three leading candidate programming models (Charm++, Legion, and UINTAH) to examine how readily these models permit insertion of new programming-model elements into an existing code base.
The Database Performance Monitoring (DPM) software (copyright in process) is being developed at Sandia National Laboratories to perform quality control analysis on time series data. The software loads time-indexed databases (currently in CSV format), performs a series of user-defined quality control tests, and creates reports that include summary statistics, tables, and graphics. DPM can be set up to run on an automated schedule defined by the user; for example, the software can be run once per day to analyze data collected on the previous day. HTML-formatted reports can be sent via email or hosted on a website. To compare the performance of several databases, summary statistics and graphics can be gathered in a dashboard view that links to detailed reporting information for each database. The software can be customized for specific applications.
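DPM's actual API is not shown in the abstract; the following is a minimal, hypothetical sketch of the kind of user-defined quality control test it describes, here a range test over one column of a time-indexed CSV, producing summary statistics and the timestamps of failing rows. Function and column names are illustrative, and only the standard library is used.

```python
import csv
import io
import statistics

def quality_control(csv_text, index_col, column, lower, upper):
    """Run a simple range test on one column of a time-indexed CSV.

    Returns summary statistics for the column plus the index values
    (e.g. timestamps) of rows whose value falls outside [lower, upper].
    """
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    values = [float(r[column]) for r in rows]
    failures = [r[index_col]
                for r, v in zip(rows, values) if not lower <= v <= upper]
    return {
        "count": len(values),
        "mean": statistics.fmean(values),
        "min": min(values),
        "max": max(values),
        "failed": failures,
    }

# Hypothetical hourly temperature readings with one out-of-range spike.
report = quality_control(
    "time,temp\n00:00,21.0\n01:00,21.5\n02:00,99.9\n03:00,20.8\n",
    index_col="time", column="temp", lower=0.0, upper=50.0)
```

A scheduled driver could run such tests daily over the previous day's data and render the returned dictionary into the HTML summary the abstract describes.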
As parallel computing trends toward the exascale, scientific data produced by high-fidelity simulations are growing increasingly massive. For instance, a simulation on a three-dimensional spatial grid with 512 points per dimension that tracks 64 variables per grid point for 128 time steps yields 8 TB of data. By viewing the data as a dense five-way tensor, we can compute a Tucker decomposition to find inherent low-dimensional multilinear structure, achieving compression ratios of up to 10,000 on real-world data sets with negligible loss in accuracy. So that we can operate on such massive data, we present the first distributed-memory parallel implementation of the Tucker decomposition, whose key computations correspond to parallel linear algebra operations, albeit with nonstandard data layouts. Our approach specifies a data distribution for tensors that avoids any tensor data redistribution, either locally or in parallel. We provide accompanying analysis of the computation and communication costs of the algorithms. To demonstrate the compression and accuracy of the method, we apply our approach to real-world data sets from combustion science simulations. We also provide detailed performance results, including parallel performance in both weak and strong scaling experiments.
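As a point of reference for the decomposition itself (not the report's distributed-memory implementation), a sequential truncated higher-order SVD is one standard way to compute a Tucker decomposition: take the leading left singular vectors of each mode unfolding as factor matrices, then project the tensor onto them to obtain the core. The NumPy sketch below uses toy sizes, not the simulation data from the abstract.

```python
import numpy as np

def mode_unfold(X, mode):
    # Move `mode` to the front and flatten the remaining modes into columns.
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def mode_multiply(X, M, mode):
    # Multiply M against mode `mode` of X, keeping the mode order intact.
    return np.moveaxis(np.tensordot(M, np.moveaxis(X, mode, 0), axes=1),
                       0, mode)

def hosvd(X, ranks):
    """Truncated HOSVD: per-mode factor matrices from the leading left
    singular vectors of each unfolding, then project to get the core."""
    factors = []
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(mode_unfold(X, mode), full_matrices=False)
        factors.append(U[:, :r])
    core = X
    for mode, U in enumerate(factors):
        core = mode_multiply(core, U.T, mode)
    return core, factors

def reconstruct(core, factors):
    X = core
    for mode, U in enumerate(factors):
        X = mode_multiply(X, U, mode)
    return X

# A tensor with exact multilinear rank (2, 2, 2) compresses losslessly
# at those ranks: an 8x8x8 tensor is stored as a 2x2x2 core plus three
# 8x2 factors.
rng = np.random.default_rng(0)
G = rng.standard_normal((2, 2, 2))
Us = [rng.standard_normal((8, 2)) for _ in range(3)]
X = reconstruct(G, Us)
core, factors = hosvd(X, (2, 2, 2))
err = np.linalg.norm(reconstruct(core, factors) - X) / np.linalg.norm(X)
```

The key computations are exactly the dense linear algebra operations the abstract mentions (unfoldings, SVDs, and mode products); the report's contribution is carrying them out in parallel with a data distribution that avoids tensor redistribution.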