Eng-Tips is the largest engineering community on the Internet

Soil strength under a foundation

Status
Not open for further replies.

Mccoy

Geotechnical
Nov 9, 2000
907
How to figure out a representative soil strength for foundation design looks like a simple task, but sometimes it isn't. Especially when we start to talk about averaging.

This is what dgillette wrote (see thread "anyone near detroit?"):
dgillette said:
I would like to caution you about using statistical analysis of soil
properties to infer the mechanical properties of the soil mass on a gross
scale. This is a topic that has occupied much of my attention for the last
25 years.
For a compressibility problem such as settlement of a building, the
vertically averaged stiffness can be a pretty good representation.
However, for strength problems, it is the weakest link in the chain (the
weakest layer) that governs. What's needed is an average over a potential
failure plane for a slope or foundation. Vertical averaging can be
dangerously unconservative.

I would like dgillette to expand his views. Particularly, it would seem to me that in settlement problems, simple averaging (even geometric or harmonic) wouldn't be a good idea unless it's done over short lengths, because the soil close to the foundation "sees" the load much more than the soil farther down (in this regard soil is a little "near-sighted"). Hence the necessity to split the succession into thin layers (Schmertmann's method).
As to failure problems, I agree that the arithmetic average is sensitive to larger values, and that a failure surface, being lazy by nature (minimizing work, or maximizing entropy), would rather avoid stiffer layers. As GAfenton pointed out in an archived thread (256-96409), the geometric or harmonic average would better reflect the laws of physics applied to failure surface propagation in soils. Nevertheless, if I have a vertical ECPT sounding below or close to a foundation, and if I assume there are no lateral soil variations, I would believe its data, properly treated, to be reasonably representative of real soil behaviour. The depth of representativeness should of course be chosen with discrimination. I favour 1 to 2+ times the footing width, according to the presence of stiffer layers (which failure surfaces like to steer clear of as much as possible). In the average foundation project, measuring soil properties over the whole potential failure surface would be a big problem (think about a slab foundation and how many field or lab tests we would need).
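The point about layer-by-layer settlement versus a single averaged stiffness can be illustrated with a quick sketch (the moduli and stress increment below are illustrative numbers, not data from the thread): for one-dimensional compression under a uniform stress increase, summing settlement layer by layer is equivalent to using the harmonic mean of the moduli, which is pulled toward the soft layers, while the arithmetic mean understates the settlement.

```python
# Sketch: 1-D compression of a layered profile (hypothetical values).
layers_E = [5.0, 20.0, 10.0, 40.0]  # constrained moduli, MPa
H = 1.0                             # each layer 1 m thick
dsigma = 0.1                        # uniform stress increase, MPa (attenuation ignored)

# settlement summed layer by layer
s_layered = sum(dsigma * H / E for E in layers_E)

E_arith = sum(layers_E) / len(layers_E)
E_harm = len(layers_E) / sum(1.0 / E for E in layers_E)

s_arith = dsigma * len(layers_E) * H / E_arith
s_harm = dsigma * len(layers_E) * H / E_harm

# the layered sum reproduces the harmonic-mean result exactly;
# the arithmetic mean gives a smaller (unconservative) settlement
```

Of course this ignores the attenuation of stress with depth that Schmertmann's method accounts for; it only shows which single "average" the layered calculation corresponds to.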
 

Mc Coy:

I tend to agree with the statement made by dgillette that one needs to exercise caution when using statistical methods to infer mechanical properties of soil.

While I have no total disagreement with the use of statistical methods, I would like to state that the elegance of mathematical expositions should not replace judgement but rather supplement same.

I fear that the thrust toward obtaining, from a data set (whether field or lab), a value or range of values that we can all agree on unequivocally will somehow relegate us to dealing with soils in an abstract manner. Not for everyone, though. For those who lack the much-needed experience, there could be problems. In some instances this lack of experience could result in costly designs, and in others in unsafe designs.

The issue of failure strength etc. is not only one of doing a test near where the foundation is, but of taking those test numbers and transforming them through some relationship (and there are many) into a value or values required as input to another relationship to give us, in the case of a foundation problem, an allowable or ultimate bearing capacity.

Here is where the problem lies. If we invoke a relationship to use for bearing capacity, then we know that the soil under a load moves downwards and outwards, as shown by the classical soil wedges under and adjacent to the foundation (the punch effect in metal plasticity) from which our classical bearing capacity formulation was initially derived.

For realistic values we should perhaps treat the problem with different values of c and phi since we have compression under the footing and shear, etc in the outer zones. This situation is not modelled by the CPT test but rather we have to take the numbers from this test and use some correlations etc. to obtain such values.

However, in our classical bearing capacity equation we cannot incorporate these varying values below and adjacent to the foundation at the same time, unless we use, say, a finite element model in which we can apply values to various discrete elements. Even the finite element model depends on our choice of input parameters. If we apply a spectrum of numbers we can hopefully get a variety of answers. However, in the end, judgement in choosing the numbers plays an important role, based on our individual experiences.

We must be careful, in the use of statistical methods, that we are also aware of how failure mechanisms are generated, since obtaining numbers, however elegant the approach may be, has to be tied in with our concept of how a foundation (deep, shallow, etc.) is expected to behave under an applied load before we can even attempt to decide on representative numbers. This area is still debatable.

The state of practice therefore is one that the choice of values will perhaps always reside with the individual who is undertaking the problem. Hence, no two persons will choose the same values or the same equations for a given problem except perhaps in undergraduate work.

In general, we tend to look at the spread of values, invoke our understanding of behaviour and generally two of us would come up with values that are generally within a range within which the values that we have calculated separately somehow fit. In the end we also are at odds with the recommendation of a final value. Here is where I think the statistical approach can be of value when combined with the risk factor for the particular project.

In general, I see the geotechnical problem solution as the geo engineer's assimilation of the project, his choice of testing, his choice of numbers, his choice of equations, and then some approach whereby his judgement can be supplemented by the judicious use of statistical or other methods; in the end, his judgement prevails.

The same probably is true in the medical profession where unknowns have to be dealt with.

 
I'll get back to you, but probably not until tomorrow or the next day. Got some fires to put out.

DRG
 
I agree with a lot of the geophilosophical things VAD says above, about judgment vs statistical analysis. I'd have more faith in your judgment about good design properties than I do in statistical analysis.

For the 5th percentile strength estimation, you would be seeking the degree-of-belief 5th percentile of the average over the failure surface, rather than the statistical 5th percentile of the set of data. Betraying my vast ignorance of statistical methods, I have to say I don't know how to calculate that.

To help illustrate what I'm trying to say, I had Excel give me 200 random numbers between 0 and 100, then I broke them into groups of 10 and found their means. The standard deviation of the 20 means was about 8 vs about 29 for the raw data. The 5th percentile on the whole data set would be about 5, vs about 40 for the 5th percentile of the 20 means. Obviously, it would be silly to design for 5 if the mean is quite unlikely to be much less than 40. Ignoring any systematic bias from interpretation of actual strength values from your CPT or SPT data (along with the issue of the lazy failure mechanism), your design strength should be much closer to the mean than to the 5th percentile of the data. If you were merely looking at the profile of interpreted strengths, I think your non-statistical judgment would be to pick a number that is somewhat lower than the mean to allow for the uncertainty in the measurements, the average of the strength data vs. average strength along the shear surface, etc.
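The Excel experiment is easy to reproduce (a sketch using only the Python standard library; the exact figures depend on the random draw, but the pattern is the same as described above):

```python
import random
import statistics

random.seed(1)  # make the draw repeatable
data = [random.uniform(0, 100) for _ in range(200)]

# break into 20 groups of 10 and take each group's mean
means = [statistics.mean(data[i:i + 10]) for i in range(0, 200, 10)]

sd_raw = statistics.stdev(data)     # about 29 for uniform(0, 100)
sd_means = statistics.stdev(means)  # roughly sd_raw / sqrt(10)

# 5th percentiles: the cut point below which 5% of values fall
p5_raw = statistics.quantiles(data, n=20)[0]    # near 5
p5_means = statistics.quantiles(means, n=20)[0]  # far higher, near 40
```

The 5th percentile of the group means sits far above that of the raw data, which is the point being made: the average over a failure surface is much less variable than the point measurements.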

Gotta go home for dinner now.

Best regards.
DRG
 
I did my undergrad work under Dr. Fang at Lehigh. He addressed the question of how to determine bearing capacity for a varied cohesionless soil. We were given the method, and I don't recall the source; I believe it was based on some analysis and considerable experience. Anyway, what we were taught was to re-average the blow counts for each new SPT data point. For example, suppose you had 8 blows @ 5 ft, 6 blows @ 10 ft, 3 blows @ 15 ft and 9 blows @ 20 ft. Thus at 5 ft the average is 8. At 10 ft it is (8+6)/2 = 7. At 15 ft the average is (8+6+3)/3 = 5.7, and at 20 ft the average is (8+6+3+9)/4 = 6.5. Thus the blow count for design would be 5.7 (which could be rounded to 6). This method can be adjusted to use weighted values where sample intervals are uneven. The beauty of it is that A) it is simple and B) it accounts for the location of weak layers. A weak layer closer to the foundation will have a greater effect than a weak layer at depth. I have not seen this in texts, but I have used it over the years and have felt comfortable with it.
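The running average is straightforward to code (a sketch; the function name is made up, and equal sample intervals are assumed):

```python
def fang_design_blowcount(blows):
    """Cumulative average of SPT N down to each data point;
    the design N is the minimum of those running averages."""
    avgs = [sum(blows[:i + 1]) / (i + 1) for i in range(len(blows))]
    return avgs, min(avgs)

# the example above: 8, 6, 3, 9 blows at 5, 10, 15, 20 ft
avgs, n_design = fang_design_blowcount([8, 6, 3, 9])
# avgs -> [8.0, 7.0, 5.67, 6.5]; design value 5.67, rounded to 6
```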
 
Vad said:
I would like to state that the elegance of mathematical expositions should not replace judgement but rather supplement same

True! Sometimes mathematics and statistics yield nonsensical results, and by themselves they are worthless. Output values should always be filtered by experience and judgment.

dGillette said:
your design strength should be much closer to the mean than to the 5th percentile of the data. If you were merely looking at the profile of interpreted strengths, I think your non-statistical judgment would be to pick a number that is somewhat lower than the mean to allow for the uncertainty in the measurements, the average of the strength data vs. average strength along the shear surface, etc.
As a matter of fact, it appears that Niels Ovesen, the former EC7 commission chairman, by 5th percentile meant the 5th percentile of the distribution of the mean, not of the sample distribution. This would put the characteristic value closer to the mean ("a cautious estimate of the mean value"...). Provided we have enough data, that would be a Student distribution.
Some authors (Lo) and some standards (NORSOK, the Norwegian standard on oil platforms) add that when the soil volume involved in the failure surface is large, we should adopt the 5th %-ile of the distribution of the mean (pretty close to the data mean), but when the failure surface is not large, we should use the 5th %-ile of the sample distribution (farther from the mean). It beats me what counts as a small volume. They cite failure at pile tips. What about small foundations? This is because in large volumes fluctuations tend to average out; in small volumes, less so.
All the above to comply with Eurocode 7. In a year it will become THE LAW in Italy!!
The estimate should not be too close to the mean, because the design approach in the Italian specs may yield unconservative results. Calibration is the weak link of EC7.
Outside of Europe, I think a specific study should be done on every occasion, old ways versus LRFD, and one should behave accordingly. The geometric mean, as Gordon Fenton suggests, could be the best choice in foundation failure problems. Even the harmonic mean in some contexts, since it is most sensitive to the weakest layers.

DRC1,
it's the first time I've heard of the Fang method you recall. It seems a sort of spatial mean with an expanding backward window, with the effect you mention of being more sensitive to the uppermost data.
I made this experiment, and please correct me if I didn't understand the method perfectly:
4 equal thickness layers, 2 models, from shallowest to deepest, blowcounts:
a): 4,6,8,10 becomes: 4,5,6,7 succession mean: 5.5
b):10,8,6,4 becomes: 10,9,8,7 succession mean: 8.5
simple arithmetic mean would be 7 for both.
geometric and harmonic means would be 6.6 and 6.2, again equal for both, since they are space-invariant.
The method is definitely more sensitive to the uppermost layers.
It would be very interesting to know the analytical source, probably, as far as I can see, the logic would be increasing energy dissipation in the lower layers, so their contribution to the overall resistance is lesser. Stress field of course would be greater near the foundation base...
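The two successions can be checked in a few lines (a sketch; equal layer thickness and sample spacing are assumed, as in the example above):

```python
import math

def running_means(ns):
    """Cumulative (expanding backward window) averages."""
    return [sum(ns[:i + 1]) / (i + 1) for i in range(len(ns))]

def geometric_mean(ns):
    return math.exp(sum(math.log(x) for x in ns) / len(ns))

def harmonic_mean(ns):
    return len(ns) / sum(1.0 / x for x in ns)

a = [4, 6, 8, 10]   # weak layers on top
b = [10, 8, 6, 4]   # weak layers at depth

ra, rb = running_means(a), running_means(b)  # [4,5,6,7] and [10,9,8,7]
mean_ra = sum(ra) / len(ra)                  # 5.5
mean_rb = sum(rb) / len(rb)                  # 8.5

gm, hm = geometric_mean(a), harmonic_mean(a)  # ~6.6 and ~6.2, identical for b
```

The running-mean successions differ between a) and b), while the arithmetic (7), geometric and harmonic means are the same for both, confirming that only the running average "sees" where the weak layers sit.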
 
Mccoy

Blow counts for a) and b) are correct. The part I may not have been clear about was that 4 and 7 (the lowest values) would be used for the blow counts in a) and b) respectively to compute bearing capacity. No further averaging is used.

I don't know the analytical basis for the method. I believe it rests on the fact that classical bearing capacity analysis assumes homogeneous soil. A weaker soil will have a shallower failure surface than a stronger soil. A low blow count near the top will have a considerable influence on the capacity of the soil; the same blow count at depth, below stronger soil, will have an effect, but not as great.
 
DRC1,

This principle is implicit in the shear depth formula:

D=0.5*B*tan(45+phi/2)

This presumes that phi varies proportionally, or nearly so, with N. Phi is assumed for an initial estimate of D. Phi is then computed as a weighted average [of tan(phi)] over D, and the formula is iterated until convergence on both D and tan(phi). I imagine that something similar could be worked out for N rather than phi on a site-specific basis.
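The iteration can be sketched like this (the depth discretization and the example phi profile are illustrative assumptions, not part of the method as stated):

```python
import math

def shear_depth(B, phi_profile, dz=0.5, phi0=30.0, tol=1e-6, max_iter=50):
    """Iterate D = 0.5*B*tan(45 + phi/2), with phi taken from a
    depth-averaged tan(phi) over the current D.
    phi_profile(z) returns phi in degrees at depth z below the footing."""
    phi = phi0
    for _ in range(max_iter):
        D = 0.5 * B * math.tan(math.radians(45 + phi / 2))
        n = max(1, int(D / dz))          # slices over 0..D
        zs = [(i + 0.5) * D / n for i in range(n)]
        tan_avg = sum(math.tan(math.radians(phi_profile(z))) for z in zs) / n
        phi_new = math.degrees(math.atan(tan_avg))
        if abs(phi_new - phi) < tol:
            return D, phi_new
        phi = phi_new
    return D, phi

# hypothetical profile: phi = 28 deg above 2 m, 34 deg below
D, phi_conv = shear_depth(B=2.0,
                          phi_profile=lambda z: 28.0 if z < 2.0 else 34.0)
```

Here the shear depth converges shallower than 2 m, so only the upper (weaker) material governs and phi settles at 28 degrees.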

Jeff
 
This has been a very interesting discussion.

Mccoy, good topic and discussion. One comment on your first post, where you mention the “lazy nature” of soils. While the soils may be lazy, a thin weak layer may not have that much impact on allowable bearing pressure, since as the shape of the failure surface departs from the classic form it becomes less efficient. I think this has probably saved a lot of footings over the years.

VAD, a good discussion of just how important judgment is to the geotechnical profession. The less judgment that we express and use, the less of an engineer we are and the more our profession becomes a commodity.

dgillette, I really hate to admit it; but I’m not getting your discussion of the 5th percentile. Could anyone who gets it, restate it for me? Don’t know why I’m not getting it.

DRC1, I also have not heard of this method of determining a design strength. However, it is similar to what I have done for years.

Now, for what they are worth, my comments on the original question.

First, I would like to point out that for all projects the amount of soil directly investigated is very, very small in comparison to the amount of soil that will be affected by the proposed construction. This is true both vertically (unless you take continuous tests, i.e. CPT) and even more so horizontally. Then you have the issue of the quality and type of data collected. You are not measuring phi and c directly.

Second, you take the field data, and hopefully some laboratory data, and start to develop design strengths. No matter the method you use in this step, you can count on being no more accurate than the data collected in the first step. As an aside, there is nothing that says you have to make the same strata and property assumptions for each of the different types of analysis that you perform; this sometimes makes it easier to assign properties to different strata, since what is slightly conservative for one analysis may be very unconservative for another.

Back to the topic at hand, what I then do is make an estimate of the largest and smallest footing that I anticipate for the area being analyzed. I then develop a design strength for each by taking twice the numerical average over 1B of depth plus the average between 1B and 2B, i.e. I double weight the soil in the 1B range. Since most of the data that I review is developed from SPT’s, pocket penetrometers, qu tests, and Q tests; I also weight the data according to source inversely to this order.
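The double weighting might be sketched like this (a sketch only: the example footing width and N values are made up, and the weighting by data source is omitted):

```python
def design_strength(depths, values, B):
    """Average the data over 0..1B and 1B..2B below the footing base,
    double-weighting the shallow zone: (2*avg1 + avg2) / 3."""
    zone1 = [v for z, v in zip(depths, values) if z <= B]
    zone2 = [v for z, v in zip(depths, values) if B < z <= 2 * B]
    avg1 = sum(zone1) / len(zone1)
    avg2 = sum(zone2) / len(zone2)
    return (2 * avg1 + avg2) / 3

# hypothetical N values every 0.75 m below a B = 1.5 m footing
n_design = design_strength([0.75, 1.5, 2.25, 3.0], [8, 6, 3, 9], B=1.5)
```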

The results are then reviewed in light of the proposed construction, anticipated loads, sensitivity to settlement, etc. and specific recommendations provided in the geotechnical report for the project.
 
McCoy:

I am curious to know what is becoming law in Italy. Would the law apply to the way engineers analyze their data and undertake their design, as well as to the way clients are required to pay the engineers to obtain the prescribed data?

It would be nice to know if the law would ensure that Clients are aware of the requirements for such and such a job and that they have to comply with the mandatory geotechnical practice requirements. If this is implemented then it will certainly be a milestone for geotechnical engineering.


 
Sorry, but I just realized I made some grammatical, or worse, semantic mistakes in some preceding posts. Is there no way to correct posts?
I said data are worthwhile by themselves, but I meant worthless. Just the opposite. I hope it was evident from the context. My being Italian is not excuse enough, since I've been studying the English (American) language for a pretty long while now...


To answer Vad: yes, the law assigns some responsibility to the clients as well, but not enough to make it a milestone. Clients will choose, jointly with the engineer, the class of importance of the project, whether to factor in the soil inertial effects (!!!), and other details which of course are going to influence the engineers' behaviour. The choice of the extent and type of site investigation is left totally to the engineer(s), though. In small projects and known areas, design can be based on experience and available (literature) data, under the engineer's full responsibility. Usually local codes will require, in vulnerable areas, a minimum number of field tests or soundings. I do not believe payment habits will be affected. The construction industry is highly competitive over here, with lots of engineers, above all structurals. Geologists will often take care of site investigation and foundation analysis (not design). Besides, there are technical school graduates who can design masonry and will take care of the architectural side, and of course architects, who may design RC structures but almost invariably will only take care of architectural design, exterior plus interior, leaving the RC design and analysis to structural engineers.
RC prevails, masonry is second, last and left behind is wood.

About the 5th percentile, dgillette at first mentioned degree of belief, I think he was speaking about imprecise probabilities and unconventional statistical theories.
Then he was referring to a simple concept: say you have an electric CPT and measure tip resistance every centimeter. You have, say, a 3 MPa average and a 0.6 standard deviation. If you average the measurements every 10 centimeters, the average of averages will still be 3 MPa, but the standard deviation will drop, because the signal has already been smoothed by the prior spatial averaging of every 10 data points.
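That smoothing effect is easy to demonstrate (a sketch with synthetic, uncorrelated readings; real CPT data are spatially correlated, so the reduction in scatter is smaller than the 1/sqrt(10) shown here):

```python
import random
import statistics

random.seed(0)
# synthetic tip resistance every 1 cm over 10 m: mean 3 MPa, sd 0.6
qc = [random.gauss(3.0, 0.6) for _ in range(1000)]

# pre-average every 10 readings, i.e. over 10 cm
qc10 = [statistics.mean(qc[i:i + 10]) for i in range(0, 1000, 10)]

mean_raw, mean_10 = statistics.mean(qc), statistics.mean(qc10)  # both ~3 MPa
sd_raw, sd_10 = statistics.stdev(qc), statistics.stdev(qc10)    # ~0.6 vs ~0.2
```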

I've prepared simple spreadsheets to measure EC7 characteristic values, or 5th percentiles, from various contexts.

I introduced 6 hypothetical data from the field vane, Su in kPa: {30, 30, 40, 40, 50, 60}.
Assuming a normal distribution, the 5th percentile of the sample will be 22.4, whereas the 5th percentile of the distribution of the mean will be 31.1. The mean itself is 41.7; the COV is about 25%.
Since the distribution of the mean is a Student distribution, it depends strongly on the number of data. With n > 30, and not excessive variability, the 5th %-ile of the mean will be pretty close to the mean itself.

In fact, doubling the above series, {30, 30, 40, 40, 50, 60, 30, 30, 40, 40, 50, 60}, we still have the same mean and variability, but the 5th percentile (of the mean) becomes 35.6, closer to the mean itself, whereas the 5th percentile of the data increases only a little, to 23.3, due to a small decrease in variability.

By the way, the geometric mean of the second set of data is 40.4, which is very close to the mean, in fact, a tad too close to be reasonably cautious. Harmonic mean is 39, again, pretty close to the mean. Professor Fenton's suggestions here would turn out to be only minimally conservative.

According to Eurocode 7, then, a safety factor of 1.25 should be applied to the above Su data, so from a mean Su of 41.7 I'll get a characteristic value (5th percentile) of 35.6 and, finally, a design value of 25.5.
This design value shall be used with no further application of safety factors. That is, my design resistance will be an ultimate bearing capacity, about 25.5 * 5.14 + q, and shall be directly compared to (conservative values of) loads.
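The six-value example can be worked through in a few lines (a sketch: the Student t value is hard-coded from a table since the standard library has no t distribution, and with the textbook formula the 5th percentile of the mean comes out near 32 rather than the 31.1 quoted above, presumably from a slightly different convention):

```python
import math
import statistics

su = [30, 30, 40, 40, 50, 60]   # field vane Su, kPa (the figures above)
n = len(su)
mean = statistics.mean(su)       # ~41.7
sd = statistics.stdev(su)        # ~11.7 (sample standard deviation)

z05 = statistics.NormalDist().inv_cdf(0.05)   # -1.645
p5_sample = mean + z05 * sd                   # ~22.4: 5th %-ile of the data

t05 = 2.015                      # one-tailed 5% Student t, 5 dof (tabulated)
p5_mean = mean - t05 * sd / math.sqrt(n)      # ~32: 5th %-ile of the mean

su_k = p5_mean                   # characteristic value
su_d = su_k / 1.25               # EC7 design value, ~25.6
q_ult_net = su_d * 5.14          # net ultimate bearing capacity (surcharge q extra)
```

Note that dividing the n = 6 characteristic value by 1.25 lands close to the design value of 25.5 mentioned above.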

GeoPaveTraffic, I hope I didn't make things more obscure. Unfortunately, European codes have evolved differently from American LRFD.
 
Thanks Mccoy. Seems an approach in the right direction. Any such start is good.
 
Question to McCoy. In your field vane averaging and fifth %-tile, did you take into account Bjerrum's correction factor (as given by him, or modified by Bowles according to Aas)? Depending on the plasticity index, you will apply corrections to the vane values to get "real" field values. If this is done, then your values of 30 become (say for a PI of 30%) about 0.9*30 = 27. Thus, your design values would fall to something like 23? Then, of course, if you have a couple of values where the PI = 40, the correction factor would be 0.8 by Bjerrum, and the true design value would become something like 20.5 to 23 kPa??

I'd love to see your 'calculations' on your example. Still, I also get a bit unnerved when I see people using undrained shear strength values to an accuracy of 0.5kPa (about 10psf) - I actually wonder when I see values used that are more accurate than about 50psf (2.5kPa) - is our rough field vane - or other means of measurement that accurate? Good thread.
 
BigH,
sorry I unnerved you by that 25.5 Su value, that wasn't on purpose [angel][angel][angel]!!

I would just remind you that that's a design value, not a measured value; the latter should of course be rounded off to avoid unrealistic instrumental precision.
The field vane has very good repeatability: in their reference article, Kulhawy & Trautmann (1996) give a 14% coefficient of variation (instrumental error plus procedure error). That is, if you did 100 tests on the same homogeneous clay, at approximately the same depth, with a mean Su of 100 kPa, 68.3% of the data would fall within the range of 86 to 114 kPa (or 85 to 115, if you like it better rounded!).
In the above example, since the figures were made up, no Bjerrum correction was applied. In real life, if you see fit to correct, you should do it before treating the data statistically (unless the correction is the same for all data), and yes, the output would drop by approximately the amount you say.
If you are interested in the spreadsheets, I'll translate them into English and pass them to Slide Rule Era. I'm preparing a course on the new Italian codes, and the spreadsheets cover the treatment of data coming from various sources (CPT, SPT, continuous penetration testing, lab tests).
I could also use critical comments.
Two interesting aspects are dynamic tests, which need a variance correction factor (depending on the soil strength scale of fluctuation), and lab tests, where phi and c' are usually negatively correlated (assuming a bivariate normal distribution), so if you produce a conservative estimate of phi, c' will not vary proportionally; it may even increase!
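The negative phi-c' correlation can be illustrated by simulation (a sketch: the marginal means, standard deviations and the correlation coefficient below are all hypothetical numbers of mine, chosen only to show the effect):

```python
import math
import random

random.seed(42)
mu_phi, sd_phi = 30.0, 2.0   # phi in degrees (hypothetical)
mu_c, sd_c = 10.0, 3.0       # c' in kPa (hypothetical)
rho = -0.6                   # assumed negative correlation

# draw from a bivariate normal via the Cholesky construction
pairs = []
for _ in range(10_000):
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    phi = mu_phi + sd_phi * z1
    c = mu_c + sd_c * (rho * z1 + math.sqrt(1 - rho ** 2) * z2)
    pairs.append((phi, c))

# conditional on a conservatively low phi (more than 1 sd below its mean),
# the average c' sits ABOVE its unconditional mean, not below it
c_given_low_phi = [c for phi, c in pairs if phi < mu_phi - sd_phi]
mean_c_low_phi = sum(c_given_low_phi) / len(c_given_low_phi)  # > 10 kPa
```

This is exactly the point: picking a cautious phi and a cautious c' simultaneously double-counts the conservatism when the two are negatively correlated.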
 