I found myself interested in the “rumble” between Augusto Lucarelli and Dr. Brian Anderson of Auburn University (not so far from where I teach) about numerical modelling and its pros and cons. For this website the timing is interesting, coming in the middle of my series on constitutive modelling (there’s more to come, Lord willing).
You’d think that someone who a) has his terminal degree in Computational Engineering and b) teaches the geotechnical component of the civil engineering undergraduate program at the University of Tennessee at Chattanooga would be an enthusiastic proponent of the use of numerical models for the work. And they’re certainly valuable, but let me just lay out some observations about them in light of the discussion above:
- At the start, mention was made of using things like SPT tests and Atterberg limits for various determinations. The truth of the matter is that the use of lab and field data alike is frequently empirical in its application. If an empirical correlation is not applicable to a given situation for whatever reason, or the soil data has problems, going from “closed form” solutions (more about that later) to numerical models isn’t going to fix problems of this kind, although it may be better at hiding them.
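To make the point concrete, here is a minimal sketch of how an empirical SPT correlation carries its own limits of applicability. It uses a Hatanaka–Uchida-type fit for clean sands; the coefficients vary between sources, and the validity range in the guard is my illustrative assumption, not a published limit. Feeding such a correlation data from outside its intended range fails no differently in a numerical model than in a hand calculation:

```python
import math

def phi_from_spt(n1_60: float) -> float:
    """Estimate the friction angle (degrees) of a clean sand from the
    corrected SPT blow count N1,60, using a Hatanaka-Uchida-type
    correlation phi' = sqrt(20 * N1,60) + 20.

    Purely empirical: the coefficients vary between sources, and the
    fit applies to granular soils only.  The range check below is an
    illustrative guard, not a published limit of the correlation.
    """
    if not 0 < n1_60 <= 60:
        raise ValueError("blow count outside the correlation's intended range")
    return math.sqrt(20.0 * n1_60) + 20.0
```

The guard is the important part: an empirical correlation without a statement of where it applies is exactly the kind of input problem that a more elaborate model can only obscure, not repair.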
- A distinction needs to be made between many of the models we developed in the past and those more commonly characterised as “numerical models” now. The USACOE program CBEAR was mentioned (programs such as CWALSHT, MAGSETII, CSETT, CSANDSET and really Pile Buck’s SPW911 are similar in nature). These are really computational aids for “closed form” solutions or similar methods. Today a numerical solution is generally finite element or finite difference, such as the COM624/LPILE family, WEAP/GRLWEAP or its inverse CAPWAP, in addition to the numerous finite element codes in use. The former are really automations of things engineers used to do completely by hand; the latter are beyond hand calculation. The advantage of the former is that, although it’s certainly possible to mess up the results with bad input, the possibility of computational error in the process is reduced. With the latter, direct replication of the results by hand calculations or closed form solutions is impossible except for very simple cases, and it is here that uncritical acceptance of results is especially dangerous.
- It is certainly possible, and desirable during model development, to compare a “closed form” solution with one generated by a numerical model, provided the case furnished to the latter is simple enough for a reasonable comparison. There are two problems here. The first is that many “modellers” do not make the effort to develop a model this way, first starting with a simple case and then proceeding to a more complex one. The second is that many of the “closed form” solutions we have in geotechnical engineering are in reality empirical correlations of variable validity. An example of this is the static pile capacity equations, for which there is a proliferation of solutions. This indicates that the science is certainly “not settled,” and in any case runs into the strength vs. service problem (more about that shortly). The program director for my PhD program said that, all other things being equal, a good analytical solution was better than a modelled one (an example of such a comparison in a simpler case is here). But finding such a “good analytical solution” in geotechnical engineering isn’t always easy.
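As an illustration of the kind of simple-case check described above, one can compare Terzaghi’s closed-form series solution for one-dimensional consolidation against an explicit finite-difference solution of the same problem. This is only a sketch: the normalised layer (drainage path and coefficient of consolidation both set to 1), the grid size, and the time-step ratio are my choices for the example.

```python
import math

def u_avg_series(T: float, terms: int = 50) -> float:
    """Terzaghi's closed-form series: average degree of consolidation U(T)."""
    s = 0.0
    for m in range(terms):
        M = math.pi * (2 * m + 1) / 2.0
        s += (2.0 / M**2) * math.exp(-(M**2) * T)
    return 1.0 - s

def u_avg_fd(T: float, nz: int = 40, beta: float = 0.4) -> float:
    """Explicit finite-difference solution of the same problem:
    normalised layer (H_dr = 1, c_v = 1), drained at the top,
    impermeable at the base, uniform initial excess pore pressure."""
    dz = 1.0 / nz
    dt = beta * dz * dz            # beta <= 0.5 required for stability
    steps = int(round(T / dt))
    u = [1.0] * (nz + 1)
    u[0] = 0.0                     # drained boundary
    for _ in range(steps):
        un = u[:]
        for i in range(1, nz):
            un[i] = u[i] + beta * (u[i - 1] - 2.0 * u[i] + u[i + 1])
        # impermeable base handled with a mirror (ghost) node
        un[nz] = u[nz] + beta * (2.0 * u[nz - 1] - 2.0 * u[nz])
        u = un
    # average remaining excess pore pressure by the trapezoid rule
    avg = (0.5 * u[0] + sum(u[1:nz]) + 0.5 * u[nz]) * dz
    return 1.0 - avg
```

At a time factor T ≈ 0.197 both give U ≈ 0.5, the familiar 50% consolidation point. The comparison only works because the test case is simple enough for a closed form to exist at all, which is exactly the discipline many modellers skip.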
- I think the way we’ve taught civil engineering at the undergraduate level does not prepare students for a meaningful understanding of how finite element and finite difference programs actually work. In some ways it’s more serious than that: the geotechnical component is the first place where students have to face two-dimensional stress distribution in a meaningful way, which has been a major challenge for me in teaching these courses over the years. In this respect the way it’s done across the pond (you can see this in texts such as Verruijt and Tsytovich) is better, which may explain some of Dr. Lucarelli’s optimism about this topic. Compounding this problem is the fact that students aren’t required to write code as they used to be (Python may or may not change that), which increases the “black box” aspect of packaged models. (In recent years I find some of my students struggling even with engineering spreadsheets…)
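For what it’s worth, the stress-distribution material students first meet in the geotechnical courses also makes a good first coding exercise. A few lines of Python cover the Boussinesq point-load solution for vertical stress in an elastic half-space (units are the caller’s responsibility; consistent force and length units are assumed):

```python
import math

def boussinesq_sigma_z(Q: float, r: float, z: float) -> float:
    """Vertical stress increase at depth z and radial offset r due to a
    point load Q on the surface of an elastic half-space (Boussinesq):
    sigma_z = 3 Q z^3 / (2 pi R^5), with R = sqrt(r^2 + z^2)."""
    if z <= 0:
        raise ValueError("depth z must be positive")
    R = math.hypot(r, z)
    return 3.0 * Q * z**3 / (2.0 * math.pi * R**5)
```

Having written even this much, a student knows that directly beneath the load the stress falls off as 1/z², and that moving off-axis reduces it further, before ever opening a packaged program that buries the same physics inside a mesh.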
- Traditionally we’ve divided analytical solutions in geotechnical engineering into three categories: those based on elasticity, those based on plasticity theory and those based on consolidation theory. The advantage of models such as finite element ones is that they can handle more than one of these at a time. The disadvantage is that how they are handled depends upon the constitutive model used by the program, something the practicing engineer may or may not understand. For example, programs which use Mohr-Coulomb theory implemented elasto-plastically employ what is admittedly a crude model, but one which fits a wider range of soils and is better integrated with our current testing scheme. A model whose parameters have to be “massaged” to fit it is a model asking for trouble.
- The entrance of plasticity into models, unavoidable in geotechnical engineering, brings up the whole issue of path dependence, which in turn brings up uniqueness issues for the solution. These are unavoidable; attempts by engineers to produce the single “right” answer are doomed to failure. The happiest result we can hope for is the “best” solution, which pushes things toward a more probabilistic way of thinking. Since the advent of LRFD and other statistical methods there is a greater appreciation of this among engineers, one which may not be shared by clients or those in the legal profession.
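The point about path dependence can be demonstrated with a toy one-dimensional elastic-perfectly-plastic material, a deliberately crude stand-in for the elasto-plastic models in the codes discussed above (the modulus and yield stress below are arbitrary illustration values, not a calibrated soil model). Two strain histories that end at the same strain leave the material at different stresses:

```python
def final_stress(strain_path, E=100.0, sigma_y=1.0):
    """Stress at the end of a strain history for a 1-D
    elastic-perfectly-plastic material: take an elastic trial
    increment, then clip the stress to the yield surface
    [-sigma_y, +sigma_y] (perfect plasticity, no hardening)."""
    sigma, eps_prev = 0.0, 0.0
    for eps in strain_path:
        sigma += E * (eps - eps_prev)               # elastic trial
        sigma = max(-sigma_y, min(sigma_y, sigma))  # return to yield surface
        eps_prev = eps
    return sigma

# Same final strain (0.005), different histories, different stresses:
direct = final_stress([0.005])        # never yields: sigma = 0.5
unload = final_stress([0.02, 0.005])  # yields at +1.0, then unloads to -0.5
```

Once plasticity enters, strain alone no longer determines stress; the history does. That is why a plastic analysis cannot hand back a single “right” answer independent of the load path assumed.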
- The overemphasis on strength loading (and the corresponding de-emphasis on service loading) is a critical topic in geotechnical engineering. Although certain problems (such as slope stability and bearing capacity failures) are catastrophic in nature because they push the soil to the upper bound failure state, most geotechnical problems involve settlement. This is especially true with deep foundations, where service load failure is the greater danger. Bengt Fellenius has hammered on this topic for a long time, and I try to emphasise it with my students as well.
- The tendency to throw the solution into the hands of “big data” or “machine learning” without using these tools to sharpen our predictive models is a scientific disaster waiting to happen, and that’s not only true of geotechnical engineering.
I am glad that DFI set this dialogue up; it was informative and timely. I am also glad that the Equipment Corporation of America, which was Vulcan Iron Works’ mid-Atlantic dealer for many years, sponsored this effort.