Experiments offer incredible value to science, but results must always come with an uncertainty quantification to be meaningful. This requires grappling with sources of uncertainty and how to reduce them. In wind energy, field experiments are sometimes conducted with a control and a treatment. In this scenario, uncertainty due to bias errors can often be neglected because they affect the control and the treatment approximately equally. Uncertainty due to random errors, however, propagates such that the uncertainty in the difference between control and treatment is always larger than the random uncertainty in either individual measurement when the sources are uncorrelated. Because random uncertainties are usually reduced with additional measurements, there is a need to know the minimum duration of an experiment required to reach acceptable levels of uncertainty. We present a general method to simulate a proposed experiment, calculate uncertainties, and determine both the measurement duration and the experiment duration required to produce statistically significant and converged results. The method is then demonstrated in a case study with a virtual experiment that uses real-world wind resource data and several simulated tip extensions to parameterize results by the expected difference in power. With this method, experiments can be better planned: specific details such as controller switching schedules, wind statistics, and post-processing binning procedures can be accounted for, their impacts on uncertainty predicted, and the measurement duration needed to achieve statistically significant and converged results determined before the experiment begins.
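The propagation rule behind this claim, that the random uncertainty of a control-minus-treatment difference exceeds either leg's uncertainty when the sources are uncorrelated, can be sketched in a few lines. This is a minimal illustration, not the paper's method; function names are illustrative, and the sample-count formula assumes equal per-sample scatter in both legs.

```python
import math

def diff_uncertainty(sigma_control, sigma_treatment):
    """Random uncertainty of (treatment - control) for uncorrelated sources:
    always >= max(sigma_control, sigma_treatment)."""
    return math.sqrt(sigma_control**2 + sigma_treatment**2)

def min_samples(sigma_single, target_uncertainty):
    """Independent samples needed per leg so that the standard error of the
    control/treatment difference falls below a target, assuming both legs
    have the same per-sample scatter sigma_single.
    SE of the difference with n samples per leg: sqrt(2) * sigma / sqrt(n)."""
    return math.ceil(2.0 * (sigma_single / target_uncertainty) ** 2)
```

For example, with a per-sample scatter of 10 kW and a target difference uncertainty of 1 kW, each leg would need 200 independent samples; converting that count into an experiment duration is where the controller switching schedule and wind statistics enter.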
This paper describes the methodology of designing a replacement blade tip and winglet for a wind turbine blade to demonstrate the potential of additive manufacturing for wind energy. The team will later field-demonstrate this additively manufactured, system-integrated tip (AMSIT) on a wind turbine. The blade tip aims to reduce the cost of wind energy by improving aerodynamic performance and reliability while reducing transportation costs. This paper focuses on the design and modeling of a winglet for increased power production while maintaining acceptable structural loads relative to the original Vestas V27 blade design. A free-wake vortex model, WindDVE, was used for the winglet design analysis. A summary of the aerodynamic design process is presented along with a case study of a specific design.
The novel Hydromine harvests energy from flowing water with no external moving parts, resulting in a robust system with minimal environmental impact. Here, two deployment scenarios are considered: an offshore floating-platform configuration to capture energy from relatively steady ocean currents at megawatt scale, and a river-based system at kilowatt scale mounted on a pylon. Hydrodynamic and techno-economic models are developed. The hydrodynamic models are used to maximize the efficiency of the power conversion. The techno-economic models optimize the system size and layout and ultimately seek to minimize the levelized cost of electricity produced. Parametric and sensitivity analyses are performed on the models to optimize performance and reduce costs.
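As a rough sketch of the techno-economic objective, a standard simplified annualized levelized-cost-of-electricity formula (fixed charge rate times capital cost, plus annual operating cost, divided by annual energy production) looks like the following. This is the textbook simplification, not the paper's model, and all parameter values are illustrative.

```python
def lcoe(capex, fixed_charge_rate, annual_opex, annual_energy):
    """Simplified annualized LCOE in cost per unit energy:
    (FCR * CapEx + OpEx) / AEP."""
    return (fixed_charge_rate * capex + annual_opex) / annual_energy
```

In an optimization loop, system size and layout parameters would feed into capex, annual_opex, and annual_energy, and the optimizer would minimize the returned value.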
Turbine generator power from simulations using actuator line models (ALM) and actuator disk models (ADM) with a filtered lifting-line correction is compared to field data from a V27 turbine. Preliminary results for the wake characteristics are also presented. Turbine quantities of interest are shown for traditional ALM and ADM with the Gaussian kernel width (ϵ) set to the optimum value for matching power production, and for variants that resolve the kernel at all mesh sizes. The atmospheric boundary layer is simulated using Nalu-Wind, a large eddy simulation code that is part of the ExaWind code suite. The effect of mesh resolution on quantities of interest is also examined.
The complexity and associated uncertainties involved in atmospheric-turbine-wake interactions produce challenges for accurate wind farm predictions of generator power and other important quantities of interest (QoIs), even with state-of-the-art high-fidelity atmospheric and turbine models. A comprehensive computational study was undertaken to assess the effects of simulation methodology, parameter selection, and mesh refinement on atmospheric, turbine, and wake QoIs and to identify capability gaps in the validation process. For neutral atmospheric boundary layer conditions, the massively parallel large eddy simulation (LES) code Nalu-Wind was used to produce high-fidelity computations for experimental validation against high-quality meteorological, turbine, and wake measurement data collected at the Department of Energy/Sandia National Laboratories Scaled Wind Farm Technology (SWiFT) facility located at Texas Tech University's National Wind Institute. The wake analysis showed that the simulated lidar model implemented in Nalu-Wind successfully captured the wake profile trends observed in the experimental lidar data.
Organic materials are an attractive choice for structural components due to their light weight and versatility. However, because they decompose at low temperatures relative to traditional materials, they pose a safety risk due to fire and loss of structural integrity. To quantify this risk, analysts use chemical kinetics models to describe the material pyrolysis and oxidation using thermogravimetric analysis. This process requires the calibration of many model parameters to closely match experimental data. Previous efforts in this field have largely been limited to finding a single best-fit set of parameters even though the experimental data may be very noisy. Furthermore, the chemical kinetics models are often simplified representations of the true decomposition process. The simplification induces model-form errors that the fitting process cannot capture. In this work, we propose a methodology for calibrating decomposition models to thermogravimetric analysis data that accounts for uncertainty in the model form and the experimental data simultaneously. The methodology is applied to the decomposition of a carbon fiber epoxy composite with a three-stage reaction network and Arrhenius kinetics. The results show good overlap between the model predictions and the thermogravimetric analysis data. Uncertainty bounds capture deviations of the model from the data. The calibrated parameter distributions are also presented. In conclusion, the distributions may be used in forward propagation of uncertainty in models that leverage this material.
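The three-stage reaction network is not reproduced here, but a single first-order Arrhenius reaction integrated under a constant heating rate illustrates the kind of forward model that gets calibrated against thermogravimetric (TGA) mass-loss curves. All parameter values below are illustrative, not the composite's calibrated values.

```python
import math

def arrhenius_rate(A, Ea, T, R=8.314):
    """Arrhenius rate constant k = A * exp(-Ea / (R * T)),
    with A in 1/s, Ea in J/mol, T in K."""
    return A * math.exp(-Ea / (R * T))

def tga_mass_fraction(A, Ea, heating_rate, T0=300.0, T1=900.0, dT=0.1):
    """Integrate d(alpha)/dt = k(T) * (1 - alpha) for a single first-order
    reaction under a constant heating rate (K/s) using explicit Euler steps.
    Returns a list of (temperature, remaining mass fraction) pairs."""
    alpha = 0.0          # extent of reaction, 0 = virgin, 1 = fully decomposed
    T = T0
    trace = []
    while T < T1:
        k = arrhenius_rate(A, Ea, T)
        alpha += k * (1.0 - alpha) * (dT / heating_rate)  # dt = dT / beta
        alpha = min(alpha, 1.0)
        trace.append((T, 1.0 - alpha))
        T += dT
    return trace
```

Calibration then amounts to choosing A and Ea (and, in the paper's case, the parameters of all three stages) so that curves like this one match the measured TGA mass-loss data, with the proposed methodology additionally placing distributions over those parameters.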
The prevalent use of organic materials in manufacturing is a fire safety concern and motivates the need for predictive thermal decomposition models. A critical component of predictive modeling is numerical inference of kinetic parameters from bench-scale data. Currently, an active area of computational pyrolysis research focuses on identifying efficient, robust methods for optimization. This paper demonstrates that kinetic parameter calibration problems can be solved successfully using classical gradient-based optimization. We explore calibration examples that exhibit characteristics of concern: high nonlinearity, high dimensionality, complicated reaction schemes, overlapping reactions, noisy data, and poor initial guesses. The examples demonstrate that a simple, non-invasive change to the problem formulation can simultaneously avoid local minima, avoid computation of derivative matrices, achieve a 10x computational speedup, and make optimization robust to perturbations of parameter components. Techniques from the mathematical optimization and inverse-problem communities are employed. By re-examining gradient-based algorithms, we highlight opportunities to develop kinetic parameter calibration methods that should outperform current methods.
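The abstract does not spell out the reformulation, so the following sketch shows a generic example of the idea rather than the paper's method: Arrhenius parameters differ by many orders of magnitude (A ~ 1e10 1/s, Ea ~ 1e5 J/mol), and reparameterizing them to comparable scales is one common non-invasive change that lets plain gradient descent on a least-squares misfit converge from a poor initial guess. The synthetic data and all values below are illustrative.

```python
import math

R, TREF = 8.314, 650.0                      # gas constant, reference temperature (K)
temps = [500.0, 600.0, 700.0, 800.0]        # "measurement" temperatures (K)

# Synthetic noiseless ln(k) data from lnA = 23, Ea = 120 kJ/mol.
true_lnA, true_Ea = 23.0, 120e3
lnk_data = [true_lnA - true_Ea / (R * T) for T in temps]

# Reparameterize: theta = (a, e) with ln k = a - e * (TREF/T - 1), where
# e = Ea / (R * TREF) and a = lnA - e. Both components are now O(1)-O(10).
x = [TREF / T - 1.0 for T in temps]

def loss_grad(a, e):
    """Gradient of the sum-of-squares misfit with respect to (a, e)."""
    r = [(a - e * xi) - d for xi, d in zip(x, lnk_data)]
    g_a = sum(2.0 * ri for ri in r)
    g_e = sum(-2.0 * ri * xi for ri, xi in zip(r, x))
    return g_a, g_e

a, e = 0.0, 10.0                            # deliberately poor initial guess
for _ in range(5000):
    g_a, g_e = loss_grad(a, e)
    a, e = a - 0.05 * g_a, e - 0.05 * g_e   # plain gradient descent

Ea_fit = e * R * TREF                       # map back to physical parameters
lnA_fit = a + e
```

Without the rescaling, the same descent on (lnA, Ea) directly would need wildly different step sizes per component; the reparameterization makes a single scalar step size work, which is the spirit (though not necessarily the letter) of the non-invasive change the paper advocates.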