Fuel cells are expected to play an important role in the hydrogen-based economy being considered worldwide. While it has been amply demonstrated that fuel cells work and are technically feasible, two of the main challenges for their rapid commercialization are cost and reliability. All of our work in the area of fuel cells addresses these two concerns in one way or another. We have developed detailed phenomenological steady-state and dynamic models for PEM fuel cells (PEMFCs). Optimization studies using these models provided several interesting insights into improving fuel cell performance and possibilities for reducing the amount of platinum, the main cost contributor.

We have also developed an interesting new PEMFC design. While most current PEMFC designs are planar, our tubular PEMFC design does not need graphite bipolar plates, another cost and weight contributor. We have demonstrated that it achieves the same power densities as the planar cell at about one-tenth the weight, leading to extremely favorable gravimetric power densities. We also believe that the water management properties of this cell are much better, because its open structure for reactant distribution contrasts with the narrow groove designs of the planar cell.

While the ideas discussed above fundamentally change the materials and designs used, we are also generating interesting results in the systems engineering aspects of fuel cell operation. One of these is online, noninvasive diagnostics of fuel cell systems. The current approach to diagnostics is to use Electrochemical Impedance Spectroscopy (EIS) to find markers for specific failures. While this seems to work, a complete EIS analysis takes a minimum of about an hour, and the interpretation of EIS results is nontrivial. In contrast, we have been working to identify markers based on transient data.
Our view is that incipient fault diagnosis of operational stacks is crucial, and that this diagnostic information can be used to adapt the control strategy and thereby extend the lifetime performance of the stack. Beyond control, faster testing also has a tremendous cost benefit and can ultimately be used in accelerated testing.
In the area of solid oxide fuel cells (SOFCs), we have developed phenomenological steady-state and dynamic models for an anode-supported tubular SOFC. Optimization studies based on this model showed that about a 30% performance improvement can be achieved over the base-case design that our industrial sponsor was using. We have also developed extremely efficient nonlinear multivariable controllers for SOFCs that can be implemented as “controllers-on-a-chip” for use with SOFC stacks.
Lab-on-chip devices containing micrometer-scale channels, integrated with on-chip heaters, mixers, and detection elements, are emerging as viable solutions for biochemical analysis, material synthesis, and biosensing. In recent years there has been a surge in the use of discrete microfluidic systems, where analysis is performed in individual immiscible plugs or droplets. This growth has been fuelled by the potential for extreme throughput offered by droplet-based microfluidics. However, fundamental challenges must be overcome to transform current droplet-based devices into massively parallelized fluidic processors.
The key scientific challenge in realizing such massively parallelized fluidic processors lies in controlling the transport of large numbers of droplets through a network of channels, akin to controlling car traffic on congested highways. This traffic problem arises because a moving confined droplet changes the hydrodynamic resistance to fluid flow in its microchannel. As a result, the path that a droplet chooses in a network of connected channels depends on the variations in the network's hydrodynamic resistance introduced by neighboring droplets. Models elucidating the fundamental mechanisms that govern the spatiotemporal behavior of droplets in a loop are also beginning to emerge. Yet the pace of development of droplet-based fluidic processors has been slow, owing to the lack of a rational framework for designing these nonlinear systems, one that allows full control of the position and timing of droplets in an interconnected network. The principal hypothesis of this work is that computational approaches can close this gap by accelerating the search for functional processor designs.
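The resistance-coupling mechanism described above can be illustrated with a minimal sketch. All parameters here (base resistances, the added resistance per droplet, transit times) are illustrative assumptions, not values from our models; the point is only to show how a droplet's path choice is coupled to the droplets already resident in the network.

```python
# Minimal sketch of droplet routing at a bifurcation with two parallel
# branches. A resident droplet raises its branch's hydrodynamic
# resistance, and each newly arriving droplet enters the branch
# carrying the larger instantaneous flow (i.e., lower resistance).

R1, R2 = 1.0, 1.2        # base hydraulic resistances (arbitrary units)
R_DROP = 0.5             # added resistance per confined droplet (assumption)
TRANSIT = 3              # time steps a droplet needs to clear a branch

def simulate(n_droplets, spacing=2):
    """Inject a droplet every `spacing` steps; return the branch choices."""
    in_branch = [[], []]          # remaining transit times, per branch
    choices = []
    t, injected = 0, 0
    while injected < n_droplets or any(in_branch):
        # advance droplets already in the network
        for b in (0, 1):
            in_branch[b] = [dt - 1 for dt in in_branch[b] if dt > 1]
        if injected < n_droplets and t % spacing == 0:
            # instantaneous resistance includes resident droplets
            r = [R1 + R_DROP * len(in_branch[0]),
                 R2 + R_DROP * len(in_branch[1])]
            # flow ~ 1/resistance at fixed pressure drop: pick larger flow
            b = 0 if r[0] <= r[1] else 1
            in_branch[b].append(TRANSIT)
            choices.append(b)
            injected += 1
        t += 1
    return choices
```

Even this toy network exhibits the feedback at the heart of the traffic problem: a droplet entering the faster branch slows that branch down, so closely spaced droplets alternate between branches rather than all taking the nominally faster path.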
Three major elements comprise our computational-thinking approach: (i) predictive models of the basic fluidic components of a processor (e.g., droplet motion in a microchannel; droplet behavior at a bifurcation and a bypass); (ii) predictive control strategies to regulate droplet traffic in functional microfluidic processor designs; and (iii) specialized machine learning tools to optimize network and device architectures for the desired processor functionality. Harnessing this computational-thinking strategy, we plan to address scientific discovery questions such as: (i) Given a desired functionality (e.g., sorting), are there multiple designs that can achieve it, and conversely, can multiple functionalities (e.g., sorting and merging) be achieved by a single device design? (ii) How robust are the designs to fabrication errors (e.g., channel dimension tolerances) and to fluctuations in the nonlinear dynamics of droplets? (iii) To what degree is active control essential to the operation of these devices?
Fault detection and diagnosis (FDD) is the problem of identifying the root cause of failures in process systems, given sensor measurements and some form of a process model. It is an important problem, with an economic impact of several billion dollars every year in the petrochemical industry alone, and it has therefore attracted extensive research effort. Our research group looks at several different approaches to solving this problem, classified as either model-based or data-based. Qualitative trend analysis (QTA) is one of the data-based approaches we have developed; it was benchmarked in an industrial facility as part of the Abnormal Situation Management consortium.

Our more recent work in this area focuses on model-based approaches for fault diagnosis in nonlinear systems. We have developed a new nonlinear feedback observer structure that results in a diagonal relationship between the faults and the residuals. This diagonal relationship is proved through a Lyapunov analysis, and a partial set of necessary and sufficient conditions has been derived. This extends nonlinear model-based fault diagnosis to a wider class of systems in which the faults affect the states nonlinearly; until now, the most general solutions have been restricted to systems where the faults enter the state equations in the so-called “fault-affine” form. Further industrial implementation and validation of these techniques will be a strong future focus. There are several interesting extensions to the nonlinear observer work: our current results are restricted to abrupt faults in systems where all the states are measured, and the solution procedure needs to be generalized to handle non-abrupt faults and systems in which not all states are measured.
Further, a complete set of necessary and sufficient conditions remains to be derived for the observer structure. Since our approach works with realistic chemical engineering models, it might be possible to build it around commercial simulation software, which should also help in industrial deployment of the work.
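The basic residual-generation idea behind model-based diagnosis can be sketched as follows. This is a plain Luenberger-type observer on a toy nonlinear system with an additive abrupt fault, all of it illustrative (the dynamics, gain, and fault size are assumptions); our structured observer design additionally shapes the fault-to-residual map to be diagonal, which this simple sketch does not attempt.

```python
import numpy as np

# Sketch of model-based residual generation: all states measured, an
# abrupt additive fault appears in the first state equation at t_fault.
# Before the fault, plant and observer agree and the residual is zero;
# after it, a persistent nonzero residual flags the failure.

def f(x, fault=np.zeros(2)):
    # toy nonlinear dynamics with an additive fault vector (assumption)
    x1, x2 = x
    return np.array([-2.0 * x1 + 0.5 * x2**2,
                     -1.0 * x2 + np.sin(x1)]) + fault

def run(t_fault=2.0, dt=0.01, t_end=4.0):
    x = np.array([1.0, 0.5])       # true plant state
    xh = np.array([1.0, 0.5])      # observer state
    L = 5.0                        # diagonal observer gain (assumption)
    residuals = []
    for k in range(int(t_end / dt)):
        t = k * dt
        fault = np.array([0.3, 0.0]) if t >= t_fault else np.zeros(2)
        y = x                      # all states measured, no noise
        r = y - xh                 # residual vector
        residuals.append(r.copy())
        # explicit Euler integration of plant and observer
        x = x + dt * f(x, fault)
        xh = xh + dt * (f(xh) + L * r)
    return np.array(residuals)

res = run()
```

Because the toy dynamics are coupled, the fault in state 1 also leaks weakly into the second residual; removing that leakage, so each residual responds to exactly one fault, is what the diagonal observer structure is designed to guarantee.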
Most work on fault diagnostic methods assumes that a set of sensors is already available for analysis and proceeds to develop diagnostic techniques. A more fundamental problem is deciding the placement of sensors that will provide maximum discriminatory information for fault diagnosis. Our group provided a solution to this problem using graph theory and integer linear programming concepts, and demonstrated in a series of publications how this approach can be used with several model forms. We further extended the approach to include network reliability concepts and retrofit design. Our recent work in this area addresses the problem of calculating the value of a sensor network from a fault diagnosis viewpoint; monetizing this value allows several sensor network designs to be compared on an ‘apples-to-apples’ basis. Further, sensor placement in large-scale energy systems is a current research focus, pursued largely through funding from the DOE, USA.

Future work will also include applying the concepts developed here in newer areas such as systems biology. The study of gene regulatory networks (GRNs) is a significant problem in systems biology; of particular interest is determining unknown or hidden higher-level regulatory signals from the gene expression data of DNA microarray experiments. Several studies in this area have demonstrated the critical role of the network structure in tackling the network modeling problem. Extending our sensor network design work, we applied our distinguishability analysis algorithms to the problem of GRN analysis. Structural analysis of systems has proved useful in a number of contexts, e.g., observability, controllability, fault diagnosis, and sparse matrix computations. In this work, we formally defined structural properties that are relevant to gene regulatory networks.
We explored the structural implications of certain quantitative methods and fully explained the connections between the identifiability conditions and the structural criteria of observability and distinguishability. We illustrated these concepts in case studies using representative, biologically motivated network examples. This work bridges quantitative modeling methods with those based on structural analysis.
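The flavor of the sensor-placement-for-discrimination problem can be conveyed with a toy instance. The fault-effect data below are invented for illustration, and the brute-force subset search stands in for our actual integer linear programming formulation; the objective is the same: choose the smallest sensor set under which every fault is detectable and every pair of faults is distinguishable.

```python
from itertools import combinations

# Toy instance: reach[f] is the set of candidate sensor locations that
# fault f affects (assumed data, not from any real process model).
reach = {
    "F1": {"s1", "s2"},
    "F2": {"s2", "s3"},
    "F3": {"s1", "s3"},
    "F4": {"s3", "s4"},
}
sensors = sorted({s for locs in reach.values() for s in locs})

def distinguishes(subset):
    """True if every fault is detectable and has a unique signature."""
    sigs = [tuple(s in reach[f] for s in subset) for f in reach]
    return len(set(sigs)) == len(sigs) and all(any(sig) for sig in sigs)

def min_sensor_set():
    """Smallest sensor subset giving full fault discrimination."""
    for k in range(1, len(sensors) + 1):
        for subset in combinations(sensors, k):
            if distinguishes(subset):
                return subset
    return None
```

On this instance no pair of sensors suffices (e.g., with {s1, s3} faults F2 and F4 produce the same signature), while {s1, s2, s3} separates all four faults; the ILP formulation finds such minimal sets without enumerating subsets, which is what makes the approach scale to large flowsheets.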
Controllers work well when they are initially deployed; over time, however, their performance starts to deteriorate for several reasons. Controller performance assessment (CPA) is the task of identifying the controllers that are working poorly. Control loop problems sometimes lead to oscillations in the process variables, and the problem of root cause analysis in oscillating loops is one of identifying the reason for the oscillation so that corrective action can be taken. The reason for oscillation in an individual control loop can be categorized as poor controller tuning, valve nonlinearities, or oscillations induced by disturbances external to the loop. It is well documented that nearly one-third of all oscillating control loops are a result of stiction (static friction) in control valves.
The research work of our group spans several facets of this problem: oscillation characterization, stiction modeling, stiction detection and quantification, stiction compensation, theoretical stiction identifiability analysis, root cause analysis in linear control loops, development of delay-free performance metrics for SISO and MIMO systems, and integration of automated controller retuning with CPA. We have developed shape-based and system identification-based techniques for stiction detection. A novel two-move stiction compensation technique was also developed and demonstrated on experimental systems, resulting in an international patent. The oscillation characterization approach has been benchmarked on thousands of industrial control loops. Performance assessment and diagnosis of MPC controllers working in conjunction with real-time optimization (RTO) systems will be an area for future work.
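The valve behavior underlying this body of work can be sketched with a simple one-parameter stiction model, in which the valve stem stays put until the controller output moves beyond a deadband and then slips. The deadband value and the input signal here are illustrative assumptions; they are not taken from our benchmark data or patented compensation scheme.

```python
import math

# One-parameter valve stiction sketch: the stem position follows the
# controller output only when the input has moved more than a deadband
# d away from the current position; otherwise the valve sticks.

def sticky_valve(u_seq, d=0.5):
    """Return stem positions for controller outputs u_seq."""
    pos, out = 0.0, []
    for u in u_seq:
        if abs(u - pos) > d:
            # slip: follow u, offset by the deadband in the direction of travel
            pos = u - math.copysign(d, u - pos)
        out.append(pos)
    return out

# A slow sinusoidal controller output yields the flat "stuck" segments
# and abrupt slips whose shape our detection techniques look for.
u = [math.sin(0.1 * k) for k in range(100)]
y = sticky_valve(u)
```

Plotting `y` against `u` would show the characteristic hysteresis loop of a sticky valve: long flat segments while the input reverses direction, which is precisely the signature that shape-based stiction detection exploits in closed-loop data.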
In any modern chemical plant, the reliability of the data used for process monitoring and control has a major impact on process efficiency and product quality. State and parameter estimation deals with the problem of obtaining accurate estimates of process variables from measured data, given a consistent process model. A landmark achievement in state estimation for linear dynamic systems was the development of the Kalman filter (KF), which gives optimal estimates in the presence of measurement and state uncertainties. For nonlinear systems, the extended Kalman filter (EKF) has been developed; it linearizes the nonlinear equations and applies the Kalman update equations to the linearized system. While the EKF is an efficient estimator because of its recursive nature, it does not take into account bounds and other algebraic constraints on the estimates and can therefore give rise to infeasible estimates. An alternate class of methods for state and parameter estimation, especially for nonlinear dynamic systems, comprises moving-horizon optimization-based techniques. While this approach allows the inclusion of constraints, it is computationally demanding because of its non-recursive form, raising real-time implementation concerns. Our solution allows the inclusion of constraints while retaining a recursive formulation that leads to efficient computational performance. In the absence of constraints and for linear processes, our solution can be shown to reduce to the Kalman solution; for nonlinear processes, the need to linearize the model equations for the calculation of the Kalman gain matrix is obviated. We have also extended this approach using the unscented transformation for a better approximation of the first- and second-order moments. Our recent work is on the development of a Receding-horizon Nonlinear Kalman (RNK) filter for state and parameter estimation.
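To make the constrained-versus-unconstrained distinction concrete, here is a minimal scalar Kalman filter whose update step is followed by a projection of the estimate onto known bounds. The projection (simple clipping) is a crude stand-in used only for illustration; it is not our actual recursive constrained formulation, and the model parameters are assumptions.

```python
# Scalar linear system x_{k+1} = a x_k + w,  y_k = x_k + v, with the
# physical knowledge that x lies in [lo, hi] (e.g., a mole fraction).
# A plain KF update can leave the feasible region; the final clipping
# step sketches one (naive) way of enforcing the constraint.

def constrained_kf(ys, a=0.9, q=0.01, r=0.25, lo=0.0, hi=1.0):
    """Return bounded state estimates for the measurement sequence ys."""
    xh, p = 0.5, 1.0               # initial estimate and covariance
    estimates = []
    for y in ys:
        # predict
        xh, p = a * xh, a * a * p + q
        # Kalman update
        k = p / (p + r)
        xh = xh + k * (y - xh)
        p = (1 - k) * p
        # project the estimate onto the feasible interval
        xh = min(max(xh, lo), hi)
        estimates.append(xh)
    return estimates
```

Feeding this filter grossly out-of-range measurements (e.g., negative readings for a nonnegative concentration) keeps every estimate inside [lo, hi], whereas the unconstrained update would produce infeasible values; the value of the recursive constrained formulation is achieving this consistently, and optimally, without the batch optimization cost of moving-horizon estimation.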