The mean width of random polytopes circumscribed around a convex body

Let K be a d-dimensional convex body, and let $K^{(n)}$ be the intersection of n halfspaces containing $K$ whose bounding hyperplanes are independent and identically distributed. Under suitable distributional assumptions, we prove an asymptotic formula for the expectation of the difference of the mean widths of $K^{(n)}$ and K, and another asymptotic formula for the expectation of the number of facets of $K^{(n)}$. These results are achieved by establishing an asymptotic result on weighted volume approximation of $K$ and by "dualizing" it using polarity.


Introduction
Let K be a convex body (compact convex set with nonempty interior) in d-dimensional Euclidean space R d . The convex hull K (n) of n independent random points in K chosen according to the uniform distribution is a common model of a random polytope contained in K. The famous four-point problem of Sylvester [43] is the starting point of an extensive investigation of random polytopes of this type. Beside specific probabilities as in Sylvester's problem, important objects of study are expectations, variances and distributions of various geometric functionals associated with K (n) . Typical examples of such functionals are volume, other intrinsic volumes, and the number of i-dimensional faces. In their ground-breaking papers [31,32], Rényi and Sulanke considered random polytopes in the Euclidean plane and proved asymptotic results for the expectations of basic functionals of random polytopes in a convex domain K in the cases where K is sufficiently smooth or a convex polygon. Since then most results have been in the form of asymptotic formulae as the number n of random points tends to infinity.
In the last three decades, much effort has been devoted to exploring the properties of this particular model of a random polytope contained in a d-dimensional convex body K. For instance, for a sufficiently smooth convex body K, asymptotic formulae were proved for the expectation of the mean width difference W (K) − W (K (n) ) by Schneider and Wieacker [38], and for the volume difference V (K) − V (K (n) ) by Bárány [1]. The assumption of smoothness was relaxed in the case of the mean width by Böröczky, Fodor, Reitzner and Vígh [10], and removed by Schütt [40] in the case of the volume. General intrinsic volumes are treated in [11] under a weak smoothness assumption. Recently, even variance estimates, laws of large numbers and central limit theorems have been proved in different settings in a sequence of contributions, for instance by Bárány, Reitzner, Schreiber, Vu and Yukich; see [3,5,29,30,39,44,45]. For more details on the current state-of-the-art of this line of research, see the survey papers by Weil and Wieacker [46], Gruber [16] and Schneider [36], and the recent monograph of Schneider and Weil [37].
In a third paper, Rényi and Sulanke [33] suggested a model that is 'dual' to the model of a random polytope contained in a given convex body K (a random inscribed polytope), that is, they considered a random polytope containing a given convex body (a random circumscribed polytope). Subsequently, this approach has not received as much attention as the 'inscribed case', although it is closely related to linear optimization (cf. [6, 34, § 6]). There are various ways of producing circumscribed random polytopes containing a given convex body. In this paper, we consider a model in which the circumscribed polytope arises as an intersection of closed halfspaces bounded by randomly chosen hyperplanes. A precise description of this probability model is given in Section 2, and a more general setting is considered in Section 5. Here we just provide the following rough description. In Euclidean space R^d, we consider hyperplanes that intersect the parallel domain at radius one of a given convex body K but miss the interior of K, and we use the corresponding restriction of the (suitably normalized) Haar measure on the set of hyperplanes in R^d to provide an associated probability measure. For n independent random hyperplanes H_1, . . . , H_n, chosen according to this distribution, the intersection of the closed halfspaces bounded by H_1, . . . , H_n and containing K determines a circumscribed random polyhedral set containing K (which might be unbounded). The main goal of this paper is to find asymptotic formulae for the expectation of the difference of the mean widths of a random circumscribed polytope and the given convex body K, and for the expectation of the number of facets of a circumscribed random polyhedral set. These (and more general) results will be achieved by establishing general results on weighted volume approximation of a given convex body by inscribed random polytopes.
In all these results, no regularity or curvature assumptions on K are required.
As for earlier results, we mention the paper by Ziezold [50], who investigated circumscribed polygons in the plane, and the doctoral dissertation of Kaltenbach [22], who proved asymptotic formulae for the expectation of the volume difference and for the expectation of the number of vertices of circumscribed random polytopes around a convex body, under the assumption that the boundary of the reference body K is sufficiently smooth. Recently, Böröczky and Schneider [9] established upper and lower bounds for the expectation of the mean width difference for a general convex body K. Furthermore, they also proved asymptotic formulae for the expected number of vertices and facets of K (n) , and an asymptotic formula for the expectation of the mean width difference, under the assumption that the reference body K is a simplicial polytope with r facets (cf. [2] for a related contribution).
In [8], Böröczky and Reitzner discuss a different model of a random circumscribed polytope where n independent random points are chosen from the boundary of a given smooth convex body K, and the intersection of the supporting halfspaces of K at these points is the random polyhedral set under consideration. This framework is again dual to the one considered by Schütt and Werner (see [42]), who study the expected volume of the convex hull of n independent random points chosen from the boundary of a convex body satisfying a weak regularity assumption.

The probability space and the main goal
We first describe the setting for stating our results on circumscribed random polyhedral sets. Throughout this paper, K denotes a compact convex set with interior points (a convex body) in d-dimensional Euclidean space R^d (d ≥ 2). We write ⟨· , ·⟩ for the scalar product and ‖·‖ for the norm in R^d. For background on convexity, we refer to the monographs by Schneider [35] or by Gruber [17]. Let V denote volume and let H^j denote the j-dimensional Hausdorff measure. The unit ball of R^d with center at the origin o is denoted by B^d, and S^{d−1} is its boundary.
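Throughout, W denotes the mean width, defined via the support function h(K, ·); we recall the standard normalization (cf. [35]):

```latex
W(K) \;=\; \frac{2}{\omega_d}\int_{S^{d-1}} h(K,u)\,\mathcal{H}^{d-1}(\mathrm{d}u),
\qquad h(K,u) := \max\{\langle x,u\rangle : x \in K\}.
```

In particular, W(B^d) = 2, and monotonicity of h(·, u) under inclusion gives W(K^{(n)} ∩ K_1) ≥ W(K).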
We consider α_d := V(B^d) and ω_d := H^{d−1}(S^{d−1}). With the normalization of the hyperplane measure described in the introduction (and made explicit in Section 5), the measure μ_K is a probability measure. For n ∈ N, let H_1, . . . , H_n be independent random hyperplanes in R^d, that is, independent H-valued random variables on some probability space (Ω, A, P), each with distribution μ_K. The possibly unbounded intersection K^{(n)} := H_1^− ∩ . . . ∩ H_n^−, where H_i^− denotes the closed halfspace bounded by H_i that contains K, for i = 1, . . . , n, is a random polyhedral set. A major aim of the present work is to investigate EW(K^{(n)} ∩ K_1), where K_1 := K + B^d and E denotes mathematical expectation. The intersection with K_1 is considered, since K^{(n)} is unbounded with positive probability. Instead of EW(K^{(n)} ∩ K_1), we could consider E_1 W(K^{(n)}), the conditional expectation of W(K^{(n)}) under the condition that K^{(n)} ⊂ K_1. Since P(K^{(n)} ⊄ K_1) ≤ γ^n with γ ∈ (0, 1) (cf. [9]), there is no difference in the asymptotic behaviors of both quantities, as n → ∞. We also remark that, for the asymptotic results, the parallel body K_1 could be replaced by any other convex body containing K in its interior; this would only affect some normalization constants.
Let ∂K denote the boundary of K. We call ∂K twice differentiable in the generalized sense at a boundary point x ∈ ∂K, if there exists a quadratic form Q on R^{d−1}, the second fundamental form of K at x, with the following property. If K is positioned in such a way that x = o and R^{d−1} is a support hyperplane of K at o, then in a neighborhood of o we see that ∂K is the graph of a convex function f defined on a (d − 1)-dimensional ball around o in R^{d−1} satisfying f(z) = (1/2) Q(z) + o(‖z‖^2) as z → o. If this is the case, we also call x a normal boundary point of K, and we write κ(x) = det(Q) to denote the generalized Gaussian curvature of K at x. Writing κ(x), we always assume that ∂K is twice differentiable in the generalized sense at x ∈ ∂K. According to a classical result of Alexandrov (see [17, 35]), ∂K is twice differentiable in the generalized sense almost everywhere with respect to the boundary measure of K (in other words, H^{d−1}-almost all boundary points are normal boundary points). Finally, we define the constant c_d (which goes back to Wieacker [49]) appearing in the statements of our main results. In the following, we simply write dx instead of H^d(dx). The main asymptotic result concerning the expected difference of the mean widths of K^{(n)} and K is the following theorem. Generalizations of Theorem 2.2, and also of Theorem 2.1 below, which hold under more general distributional assumptions, are provided in Section 5. There we also indicate the connection to the p-affine surface area of a convex body.
Let f i (P ), where i ∈ {0, . . . , d − 1}, denote the number of i-dimensional faces of a polyhedral set P . In the statement of the following theorem, K (n) could be replaced by the intersection of K (n) with a fixed polytope containing K in its interior without changing the right-hand side. Alternatively, instead of E(f d−1 (K (n) )) we could consider the conditional expectation of f d−1 (K (n) ) under the assumption that K (n) is contained in K 1 .
Both theorems will be deduced from a 'dual' result on weighted volume approximation of convex bodies by inscribed random polytopes. This result is stated in the subsequent section. The usefulness of duality in random or best approximation has previously been observed for example in [12,15,22].

Weighted volume approximation by inscribed polytopes
For a given convex body, we introduce a class of inscribed random polytopes. Let C be a convex body in R^d, let ϱ be a bounded nonnegative measurable function on C, and let H^d ⌐ C denote the restriction of H^d to C. Assuming that ∫_C ϱ(x) H^d(dx) > 0, we choose random points from C according to the probability measure given by

P_{ϱ,C}(A) := (∫_C ϱ(x) H^d(dx))^{−1} ∫_A ϱ(x) H^d(dx)

for Borel sets A ⊂ C. Expectation with respect to P_{ϱ,C} is denoted by E_{ϱ,C}. The convex hull of n independent and identically distributed random points with distribution P_{ϱ,C} is denoted by C_{(n)} if ϱ is clear from the context. This yields a general model of an inscribed random polytope.
Generalizing a result by Schütt [40], we prove the following theorem.
Theorem 3.1. For a convex body K in R^d, a probability density function ϱ on K, and an integrable function λ : K → R such that, on a neighborhood of ∂K relative to K, λ and ϱ are continuous and ϱ is positive, we have

lim_{n→∞} n^{2/(d+1)} E_{ϱ,K} ∫_{K∖K_{(n)}} λ(x) dx = c_d ∫_{∂K} λ(x) ϱ(x)^{−2/(d+1)} κ(x)^{1/(d+1)} H^{d−1}(dx).   (3.1)

The limit on the right-hand side of (3.1) depends only on the values of ϱ and λ on the boundary of K. In particular, we may prescribe any continuous positive function ϱ on ∂K. Then any continuous extension of ϱ to a probability density on K (there always exists such an extension) will satisfy Theorem 3.1 with the prescribed values of ϱ on the right-hand side.
Our proof of Theorem 3.1 is inspired by the approach in [40], where the special case ϱ ≡ λ ≡ 1 is considered. We note that for [40, Lemma 2], which is crucial for the proof in [40], no explicit proof is provided, but reference is given to an analogous result in an unpublished note by M. Schmuckenschläger. Besides a missing factor 1/2, Lemma 2 does not hold in the generality stated in [40]. For instance, it is not true for simplices. Most probably, this gap can be overcome, but still our approach to proving Theorem 3.1, where [40, Lemma 2] is replaced by the present more elementary Lemma 4.2, might be of some interest.
The present partially new approach to Theorem 3.1 involves also some other interesting new features. In particular, we do not need the concept of a Macbeath region. An outline of the proof is given below. It should also be emphasized that the generality of Theorem 3.1 is needed for our study of circumscribed random polyhedral sets via duality.
A classical argument going back to Efron [13] shows that

E_{ϱ,K} f_0(K_{(n)}) = n E_{ϱ,K} ∫_{K∖K_{(n−1)}} ϱ(x) dx,

which yields the following consequence of Theorem 3.1.
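To see how Efron's identity interacts with Theorem 3.1, take λ = ϱ there; the following heuristic sketch (with c standing for the resulting boundary integral) explains the order n^{(d−1)/(d+1)} of the expected number of vertices in Corollary 3.2:

```latex
\mathbb{E}_{\varrho,K} f_0(K_{(n)})
 \;=\; n\,\mathbb{E}_{\varrho,K}\!\int_{K\setminus K_{(n-1)}}\varrho(x)\,\mathrm{d}x
 \;\sim\; n\cdot c\,(n-1)^{-2/(d+1)}
 \;\sim\; c\, n^{(d-1)/(d+1)},\qquad n\to\infty.
```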
Corollary 3.2. For a convex body K in R^d, and for a probability density function ϱ on K which is continuous and positive on a neighborhood of ∂K relative to K, we have

lim_{n→∞} n^{−(d−1)/(d+1)} E_{ϱ,K} f_0(K_{(n)}) = c_d ∫_{∂K} ϱ(x)^{(d−1)/(d+1)} κ(x)^{1/(d+1)} H^{d−1}(dx).

The proof of Theorem 3.1 is obtained through the following intermediate steps; details are provided in Section 4. Since the convex body K is fixed, we write E and P instead of E_{ϱ,K} and P_{ϱ,K}, respectively. The basic observation to prove Theorem 3.1 is that

E ∫_{K∖K_{(n)}} λ(x) dx = ∫_K λ(x) P(x ∉ K_{(n)}) dx,   (3.2)

which is an immediate consequence of Fubini's theorem. Throughout the proof, we may assume that o ∈ int(K). The asymptotic behavior, as n → ∞, of the right-hand side of (3.2) is determined by points x ∈ K which are sufficiently close to the boundary of K. In order to give this statement a precise meaning, scaled copies of K are introduced as follows. For t ∈ (0, 1), we define K_t := (1 − t)K and y_t := (1 − t)y for y ∈ ∂K. In Lemma 4.3, we show that, after multiplication by n^{2/(d+1)}, the part of the integral in (3.2) corresponding to points y_t with t ≥ n^{−1/(d+1)} tends to zero as n → ∞. This limit relation is based on a geometric estimate of P(x ∈ K_{(n)}), provided in Lemma 4.1, and on a disintegration result stated as Lemma 4.2.
For y ∈ ∂K, we write u(y) for some exterior unit normal of K at y. This exterior unit normal is uniquely determined for H^{d−1}-almost all boundary points of K. Applying the disintegration result again and using Lebesgue's dominated convergence theorem, we finally arrive at a boundary integral representation of the limit. For the subsequent analysis, it is sufficient to consider a small cap of K at a normal boundary point y ∈ ∂K. The case κ(y) = 0 is treated in Lemma 4.4. The main case is κ(y) > 0. Here we reparametrize y_t as ỹ_s, in terms of the probability content of a small cap of K whose bounding hyperplane passes through y_t; cf. (4.26). It is then a crucial step in the proof to show that the remaining integral is asymptotically independent of the particular convex body K, and thus the limit of the integral is the same as for a Euclidean ball (see Lemma 4.6). To achieve this, the integral is first approximated, up to a prescribed error of order ε > 0, by replacing P(ỹ_s ∉ K_{(n)}) by the probability of an event that depends only on a small cap of K at y and on a small number of random points. This important step is accomplished in Lemma 4.5. For the proofs of Lemmas 4.5 and 4.6 it is essential that the boundary of K near the normal boundary point y can be suitably approximated by the osculating paraboloid of K at y.

Proof of Theorem 3.1
To start with the actual proof, we fix some further notation. For y ∈ ∂K and t ∈ (0, 1), we define the cap C(y, t) := {x ∈ K : ⟨u(y), x⟩ ≥ ⟨u(y), y_t⟩}, whose bounding hyperplane passes through y_t and has normal u(y).
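For orientation, we record the cap-volume asymptotics behind the exponent (d + 1)/2 used below: at a normal boundary point y with κ(y) > 0, replacing ∂K near y by the osculating paraboloid (graph of z ↦ ½⟨Qz, z⟩ with det Q = κ(y)), a cap of height h > 0 has volume

```latex
\int_{\frac12\langle Qz,z\rangle \le h}\Bigl(h-\tfrac12\langle Qz,z\rangle\Bigr)\mathrm{d}z
 \;=\; \frac{2^{(d+1)/2}\,\alpha_{d-1}}{d+1}\,\kappa(y)^{-1/2}\,h^{(d+1)/2},
```

obtained by the substitution z = \sqrt{2h}\,Q^{-1/2}w together with ∫_{B^{d−1}} (1 − ‖w‖²) dw = 2α_{d−1}/(d + 1). This is a sketch under the paraboloid approximation only; the actual proof controls the approximation error via the functions μ_i.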
For y ∈ ∂K, we denote by r(y) the maximal number r ≥ 0 such that y − r u(y) + r B^d ⊂ K. This number is called the interior reach of the boundary point y; it is well known that r(y) > 0 for H^{d−1}-almost all y ∈ ∂K. For real functions f and g defined on the same space I, we write f ≪ g or f = O(g) if there exists a positive constant γ, depending only on K, ϱ and λ, such that |f| ≤ γ · g on I. In general, we write γ_0, γ_1, . . . to denote positive constants depending only on K, ϱ and λ. The Landau symbol o(·) is defined as usual. We further consider R_+ := [0, ∞).
Finally, we observe that there exists a constant γ_0 ∈ (0, 1) such that, for y ∈ ∂K, we have |⟨y, u(y)⟩| ≥ γ_0 ‖y‖, and hence a corresponding bound for ‖y | u(y)^⊥‖, where y | u^⊥ denotes the orthogonal projection of y onto the orthogonal complement of the vector u ∈ R^d ∖ {o}. Subsequently, we always assume that n ∈ N.
(ii) In the following, we use the notion of a 'coordinate corner'. Given an orthonormal basis in a linear i-dimensional subspace L, the corresponding (i − 1)-dimensional coordinate planes cut L into 2 i convex cones, which we call coordinate corners (with respect to L and the given basis).
Proof of Lemma 4.1. If r(y) = 0, then there is nothing to prove. Therefore let r(y) > 0; then u(y) is uniquely determined. Choose an orthonormal basis in u(y)^⊥, and let Θ_1, . . . , Θ_{2^{d−1}} be the corresponding coordinate corners in u(y)^⊥. For i = 1, . . . , 2^{d−1} and t ∈ [0, 1], we define If δ > 0 is small enough to ensure that ϱ is positive and continuous in a neighborhood (relative to K) of ∂K, then If y_t ∉ K_{(n)} and o ∈ K_{(n)}, then there exists a hyperplane H through y_t, bounding the halfspaces Finally, we prove that and we are done. On the other hand, if t ≥ γ_3 r(y), then To deal with the case o ∉ K_{(n)}, we observe that there exists a positive constant γ_5 ∈ (0, 1) such that the probability measure of each of the 2^d coordinate corners of R^d is at least γ_5. If o ∉ K_{(n)}, then {x_1, . . . , x_n} is disjoint from one of these coordinate corners, and hence Now the assertion follows from (4.2)–(4.4).
Subsequently, the estimate of Lemma 4.1 will be used, for instance, to restrict the domain of integration on the right-hand side of (3.2) (cf. Lemma 4.3) and to justify an application of Lebesgue's dominated convergence theorem (see (4.9)). For these applications, we also need that if c > 0 is such that ω := c δ (d+1)/2 < 1, then The next lemma allows us to decompose integrals in a suitable way.
everywhere) unique exterior unit normal of ∂K at y. The assertion now follows from Federer's area/coarea theorem (see [14]).
In the following, for α > −1, we use an important integral-geometric fact due to Schütt and Werner [41]. By decomposing λ into its positive and its negative part, we can henceforth assume that λ is a nonnegative integrable function. Lemma 4.3. As n tends to infinity, we have Proof. Let δ > 0 be chosen as in Lemma 4.1 and the subsequent remark. First, we consider a point x in K_δ. Let ω be the minimal distance between the points of ∂K and K_δ, and let z_1, . . . , z_k be a maximal family of points in By the maximality of the set {z_1, . . . , z_k}, we have Let z_j lie in the intersection. Then z_j + ω (4.7) Consider ε := (2(d² − 1))^{−1} and let n ≥ δ^{−(d+1)}. For y ∈ ∂K we show that In fact, if r(y) ≤ n^{−(d+1)ε}, then Lemma 4.1 and (4.5) yield where the assumption on r(y) is used for the last estimate.
If r(y) ≥ n^{−(d+1)ε} and n ≥ n_0, where n_0 depends on K, ϱ and λ, then Lemma 4.1 implies, for all t ∈ (n^{−1/(d+1)}, δ), that which again yields (4.8). In particular, writing I to denote the integral in Lemma 4.3, we obtain from Lemma 4.2, (4.7), (4.8) and (4.6) that where we also used the fact that λ is integrable on K and bounded on K ∖ K_δ. This is the required estimate.
It follows from (3.2), Lemmas 4.3 and 4.2 that Lemma 4.1 and (4.5) imply that, if y ∈ ∂K and r(y) > 0, then Therefore, by (4.6) and since λ is bounded and continuous in a neighborhood of ∂K, we may apply Lebesgue's dominated convergence theorem, and thus Proof. In view of the estimate (4.4), it is sufficient to prove that, for any given ε > 0, if n is sufficiently large. We choose the coordinate axes in u(y)^⊥ parallel to the principal curvature directions of K at y, and denote by Θ_1, . . . , Θ_{2^{d−1}} the corresponding coordinate corners. For i = 1, . . . , 2^{d−1} and t ∈ (0, n^{−1/(d+1)}), let and hence, if n is large enough, then since ϱ is continuous and positive near ∂K. If y_t ∉ K_{(n)} and o ∈ K_{(n)}, then there exists a halfspace H^− that contains K_{(n)} and for which y_t ∈ ∂H^−. Moreover, for some i ∈ {1, . . . , 2^{d−1}} the interior of H^− is disjoint from Θ_{i,t}. Hence, as in the proof of Lemma 4.1, we have Since ∂K is twice differentiable in the generalized sense at y, we have r(y) > 0. By assumption, κ(y) = 0; therefore one principal curvature at y is zero, and hence the relevant curvature expression is less than ε^{d+1} r(y)^{d−2}.
Next we consider the case of a normal boundary point y ∈ ∂K with κ(y) > 0. First, we prove that J (y) depends only on the random points near y (see Lemma 4.5). In a second step, we compare the simplified expression obtained for J (y) with the corresponding expression which is obtained if K is a ball.
We start by reparametrizing y_t in terms of the probability measure of the corresponding cap. For t ∈ (0, n^{−1/(d+1)}), where n ≥ n_0 is sufficiently large so that ϱ is positive and continuous on C(y, t), for all y ∈ ∂K, we consider ỹ_s, where, for given s > 0 (sufficiently small), the corresponding t = t(s) is determined by the relation It is easy to see that the right-hand side of (4.12) is a continuous and strictly increasing function s = s(t) of t, if t > 0 is sufficiently small. This implies that, for a given s > 0 (sufficiently small), there is a unique t(s) such that (4.12) is satisfied. Moreover, observe that ds/dt = ⟨u(y), y⟩ for t ∈ (0, n^{−1/(d+1)}). We further define where k_i(y), with i = 1, . . . , d − 1, are the generalized principal curvatures of K at y and where Since y is a normal boundary point of K, there is a nondecreasing function μ : (0, ∞) → R with lim_{r→0+} μ(r) = 1 such that where K(u, r) := K ∩ H(u, h(K, u) − r). In the following, μ_i : (0, ∞) → R, where i = 1, 2, . . ., always denote nondecreasing functions with lim_{r→0+} μ_i(r) = 1. Applying (4.14) and Fubini's theorem, we get which yields since ϱ is continuous at y. Moreover, defining we obtain in the sense of the Hausdorff metric on compact convex sets (see [17] or [35]). Here we also use the fact that lim Now it follows from (4.13) and (4.16) that (4.9) turns into The rest of the proof is devoted to identifying the asymptotic behavior of the integral. First, we adjust the domain of integration and the integrand in a suitable way. In a second step, the resulting expression is compared to the case where K is the unit ball. We recall that x_1, . . . , x_n are random points in K, and we consider Ξ_n := {x_1, . . . , x_n}, and hence K_{(n)} = [Ξ_n]. Let #X denote the cardinality of a finite set X ⊂ R^d.
To prove (i), we first observe that If α/n < s < g(n, y), o ∈ K_{(n)}, ỹ_s ∉ K_{(n)} and if n is sufficiently large, then there is some i ∈ {1, . . . , 2^{d−1}} such that Θ_{i,s} ∩ K_{(n)} = ∅, and hence (4.4) and (4.18) yield Therefore, by the definition of α, we get g(n,y) which verifies (i). Next, (ii) simply follows from (4.12): if s < α/n, then Now we prove (iii). To this end, for s in the given range, our plan is to construct sets Ω_{1,s}, . . . , Further, for i = 1, . . . , 2^{d−1} we define Then, if s > 0 is small enough, we have ỹ_{√β s} + w_i ∈ K, and hence Ω_{i,s} ⊂ K. Here we use the fact that 1 2 ηE and therefore, by (4.16), we have ỹ_s = (1 − t)y, where s and t are related by (4.15). Hence, if s, t > 0 are sufficiently small, then (4.20).
Remark. As a consequence of the proof of Lemma 4.5, it follows that In fact, since g(n, y) ≫ n^{−1/2}, it is sufficient to show that lim_{n→∞} n^{2/(d+1)} for any two constants 0 < c_1 ≤ c_2 < ∞. Since the estimate (4.19) can be applied, we get from which the conclusion follows.
Subsequently, we write 1 to denote the constant one function on R^d. For the unit ball B^d, we recall that B^d_{(n)} denotes the convex hull of n random points distributed uniformly and independently in B^d. We fix a point w ∈ ∂B^d and, for s ∈ (0, 1/2), define w̃_s := t · w, where t ∈ (0, 1) is chosen such that By a classical result due to Wieacker [49], we have where the constant c_d is given in (2.2). Hence, it follows from (4.9), (4.26) and the preceding remark that (4.27) We are now going to show that the same limit is obtained if B^d is replaced by the convex body K and if a normal boundary point y of K with positive Gauss curvature is considered instead of w ∈ ∂B^d. Lemma 4.6. If y ∈ ∂K is a normal boundary point of K satisfying κ(y) > 0, then Proof. Let ε ∈ (0, 1) be arbitrarily chosen. According to Lemma 4.5 and its notation, and by the preceding remark, if n is sufficiently large, we have (4.28) We fix a unit vector p, and consider the reference paraboloid Ψ which is the graph of z ↦ ‖z‖^2 on p^⊥. For τ > 0, define that is, a cap of Ψ of height τ^{2/(d+1)}. It is easy to check that V(C(τ)) = τ V(C(1)). We define Then (4.12) implies that s̃(β, s) = βs / (μ(β, s) ϱ(y) β V(C(1))) = s / (μ(β, s) ϱ(y) V(C(1))), where μ(β, s) → 1 as s → 0+. Let A_s, with s > 0, denote the affinity of R^d with A_s(y) = y for which the associated linear map Ã_s satisfies Ã_s(v) = s^{1/(d+1)} v for v ∈ u^⊥ and Ã_s(u) = s^{2/(d+1)} u. Then the image under A_s^{−1} of a cap of K at y converges in the Hausdorff metric, as s → 0+, to a cap of the osculating paraboloid of K at y. For a more explicit statement, let A be a volume-preserving affinity of R^d such that A(y) = o and A(y − u) = p, which maps the osculating paraboloid of K at y to Ψ. Then Φ_{s,β} := A ∘ A_{s̃(β,s)}^{−1} is an affinity satisfying and, consequently, Φ_{s,β}(C(y, βs)) → C(β) in the Hausdorff metric as s → 0+. Moreover, we have since μ(β, s) → 1 and μ(1, s) → 1 as s → 0+, ỹ_s ∈ ∂C(y, s) and Φ_{s,1}(ỹ_s) ∈ ∂C(1), and by (4.17).

Polarity and the proof of Theorem 2.1
In this section, we deduce Theorem 2.1 and Theorem 2.2 from Theorem 3.1 and Corollary 3.2, respectively. In order to obtain more general results, for not necessarily homogeneous or isotropic hyperplane distributions, we start with a description of the basic setting.
Let K ⊂ R^d be a convex body with o ∈ int(K), let K* := {z ∈ R^d : ⟨x, z⟩ ≤ 1 for all x ∈ K} denote the polar body of K, and consider K_1 := K + B^d. Let H_K denote the set of all hyperplanes H in R^d for which H ∩ int(K) = ∅ and H ∩ K_1 ≠ ∅. The motion invariant locally finite measure μ on the space A(d, d − 1) of hyperplanes, which satisfies μ(H_K) = 2, is explicitly given by

μ(·) = 2 ∫_{S^{d−1}} ∫_0^∞ 1{H(u, t) ∈ ·} dt σ(du), where H(u, t) := {x ∈ R^d : ⟨x, u⟩ = t}

and σ is the rotation invariant probability measure on the unit sphere S^{d−1}. The model of a random polytope (random polyhedral set) described in the introduction is based on random hyperplanes with distribution μ_K := 2^{−1}(μ ⌐ H_K). More generally, we now consider random hyperplanes with distribution μ_q, defined by a measurable density q with respect to μ which has the properties (q1)–(q3) referred to below. The intersection of the n halfspaces H_i^− containing the origin o and bounded by n independent random hyperplanes H_i with distribution μ_q is denoted by K^{(n)} := H_1^− ∩ . . . ∩ H_n^−. Probabilities and expectations with respect to μ_q are denoted by P_{μq} and E_{μq}, respectively. The special example q ≡ 1_{D_K} (q is the characteristic function of D_K) covers the situation discussed in the introduction.
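With the parametrization H(u, t) := {x ∈ R^d : ⟨x, u⟩ = t}, for u ∈ S^{d−1} and t > 0, the normalization μ(H_K) = 2 can be verified directly; the following check (a sketch in our notation) uses o ∈ int(K) and h(K_1, u) = h(K, u) + 1:

```latex
\mu(\mathcal{H}_K)
 \;=\; 2\int_{S^{d-1}}\int_0^\infty \mathbf{1}\{h(K,u)\le t\le h(K,u)+1\}\,\mathrm{d}t\,\sigma(\mathrm{d}u)
 \;=\; 2\int_{S^{d-1}}\sigma(\mathrm{d}u) \;=\; 2 .
```

Here a hyperplane belongs to H_K precisely when its distance parameter t lies between h(K, u) and h(K_1, u).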
In the following, as well as the support function, we also need the radial function ρ(L, ·) of a convex body L with o ∈ int(L). Let F be a nonnegative measurable functional on convex polyhedral sets in R d . Using (5.1) and Fubini's theorem, we get (u 1 , . . . , u n )).
For t 1 , . . . , t n > 0, we have Using the substitution , and polar coordinates, we obtain The case n = 1 and F ≡ 1 yields 1 and hence x ∈ K * 1 , is a probability density with respect to H d K * that is positive and continuous in a neighborhood of ∂K * relative to K * . Thus, we have Proposition 5.1. Let K ⊂ R d be a convex body with o ∈ int(K), and let q and be defined as above. Then the random polyhedral sets K (n) and (K * (n) ) * are equal in distribution.
For a first application, let F be defined for a polyhedral set P ⊂ R^d, with the convention 0 · ∞ := 0. For x_1, . . . , x_n ∈ K* ∖ K*_1, we have K ⊂ [x_1, . . . , x_n]* and, arguing as before, for [x_1, . . . , x_n]. As in [9], it can be shown that P_{μq}(K^{(n)} ⊄ K_1) ≤ α^n, for some α ∈ (0, 1) depending on K and q. By Proposition 5.1, we also get Hence we have where we used that λ is integrable. Therefore, by Theorem 3.1, we have where κ* denotes the generalized Gauss curvature of K*. In the following, for x ∈ ∂K, let σ_K(x) denote an exterior unit normal vector of K at x. It is unique for H^{d−1}-almost all x ∈ ∂K.
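The weight appearing in this application can be made explicit by the standard dualization of the mean width; using ρ(L*, u) = h(L, u)^{−1} and spherical polar coordinates, one computes (a sketch, valid when K^{(n)} is bounded):

```latex
W(K^{(n)}) - W(K)
 = \frac{2}{\omega_d}\int_{S^{d-1}}\Bigl(\frac{1}{\rho((K^{(n)})^*,u)}-\frac{1}{\rho(K^*,u)}\Bigr)\mathcal{H}^{d-1}(\mathrm{d}u)
 = \frac{2}{\omega_d}\int_{K^*\setminus (K^{(n)})^*}\|x\|^{-(d+1)}\,\mathrm{d}x ,
```

since ∫_a^b r^{−(d+1)} r^{d−1} dr = a^{−1} − b^{−1}. This indicates why Theorem 3.1 is applied here with a weight function λ proportional to ‖x‖^{−(d+1)} on K*.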
is positive and continuous, then q can be extended to [0, ∞) × S^{d−1} such that (q1)–(q3) are satisfied. For any such extension, the right-hand side of (5.2) remains unchanged. As an example, we may choose q_1 such that q_1(t, u) = t^{(d²−1)/2} for t = h(K, u) and u ∈ S^{d−1}. Then the integral in (5.2) turns into is the p-affine surface area of K (see [18, 19, 23–25, 27, 47, 48]). It has been shown that Ω_{d²}(K) = Ω_1(K*); see [19]. Moreover, for a convex body L ⊂ R^d, the equiaffine isoperimetric inequality states that with equality if and only if L is an ellipsoid (cf. [7, 18, 26–28]). Thus we get with equality if and only if K* is an ellipsoid, that is, if and only if K is an ellipsoid. This can be interpreted as saying that, among all convex bodies for which the volume of the polar body is fixed, ellipsoids are asymptotically worst approximated by circumscribed random polytopes (with respect to the density q_1) in the sense of the mean width.
For another application, we define for a convex polyhedral set P ⊂ R d . It is well known that f 0 (P ) = f d−1 (P * ) for a convex polytope P ⊂ R d with o ∈ int(P ). Thus, from Proposition 5.1, we get The following Theorem 5.3 generalizes Theorem 2.1 in the same way as Theorem 5.2 extends Theorem 2.2.
The proof follows by applying Corollary 3.2 and Lemma 6.2.

Polarity and an integral transformation
In this section, we establish the required integral transformation involving the generalized Gauss curvatures of a convex body and its polar body. The main difficulty of the proof is due to the fact that we do not make any smoothness assumptions on the convex bodies that are considered.
Let L ⊂ R^d be a convex body. If the support function h_L of L is differentiable at u ≠ o, then the gradient ∇h_L(u) of h_L at u is equal to the unique boundary point of L having u as an exterior normal vector. In particular, the gradient of h_L is a function that is homogeneous of degree zero. Note that h_L is differentiable at H^{d−1}-almost all unit vectors. We write D_{d−1} h_L(u) for the product of the principal radii of curvature of L in direction u ∈ S^{d−1}, whenever the support function h_L is twice differentiable in the generalized sense at u ∈ S^{d−1}. Note that this is the case for H^{d−1}-almost all u ∈ S^{d−1}. The Gauss map σ_L is defined H^{d−1}-almost everywhere on ∂L. If σ_L is differentiable in the generalized sense at x ∈ ∂L, which is the case for H^{d−1}-almost all x ∈ ∂L, then the product of the eigenvalues of the differential is the Gauss curvature κ_L(x). The connection to curvatures defined on the generalized normal bundle N(L) of L will be used in the following proof (cf. [20]). Lemma 6.1. Let L ⊂ R^d be a convex body containing the origin in its interior.
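For a smooth comparison, assume for the moment that ∂L is of class C² with everywhere positive Gauss curvature. Then σ_L and the restriction of ∇h_L to S^{d−1} are mutually inverse bijections, and since the principal radii of curvature at the boundary point ∇h_L(u) are the reciprocals of the principal curvatures there,

```latex
\kappa_L\bigl(\nabla h_{L}(u)\bigr)\; D_{d-1}h_L(u) \;=\; 1, \qquad u \in S^{d-1}.
```

Lemma 6.1 may be viewed as an almost-everywhere version of this identity for general convex bodies, where the curvatures may vanish or be infinite; the smooth special case is included here only as orientation.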
Proof. In the following proof, we use results and methods from [20], to which we refer the reader for additional references and detailed definitions. Let N (L) denote the generalized normal bundle of L and let k i (x, u) ∈ [0, ∞], with i = 1, . . . , d − 1, be the generalized curvatures of L, which are defined for H d−1 -almost all (x, u) ∈ N (L). Expressions such as with k i (x, u) = ∞ are understood as limits as k i (x, u) → ∞, and yield 0 or 1, respectively, in the two given examples. As is common in measure theory, the product 0 · ∞ is defined as 0. Our starting point is the expression which will be evaluated in two different ways. A comparison of the resulting expressions yields the assertion of the lemma. First, we rewrite I in the form where for H d−1 -almost all (x, u) ∈ N (L), is the (approximate) Jacobian of the map π 2 : N (L) → S d−1 , (x, u) → u. To check (6.2), we distinguish the following cases. If k i (x, u) = 0 for some i, then the integrands on the right-hand sides of (6.1) and (6.2) are zero, since 0 · ∞ = 0 and J d−1 π 2 (x, u) = 0. If k i (x, u) = 0 for all i and k j (x, u) = ∞ for some j, then again both integrands are zero. In all other cases the assertion is clear.
For H d−1 -almost all u ∈ S d−1 , we see that ∇h L (u) ∈ ∂L is the unique boundary point of L which has u as an exterior unit normal vector. Then the coarea formula yields Using [20,Lemma 3.4], we get Now we consider also the projection π 1 : N (L) → ∂L, (x, u) → x, which has the (approximate) Jacobian for H d−1 -almost all (x, u) ∈ N (L). A similar argument as before yields By [20, Lemma 3.1], we also get Remark. An alternative argument can be based on arguments similar to those used in [18] for the proof of the equality of two representations of the affine surface area of a convex body.
Proof. We apply Lemma 6.1 with L = K* and g(x) = f̃(x) ‖x‖^{−d+1}, for x ∈ ∂K*, and thus we get Next we apply Theorem 2.2 in [19] (or the second part of [21, Corollary 5.1]). Thus, using the fact that, for H^{d−1}-almost all u ∈ S^{d−1}, the support function h_{K*} is differentiable in the generalized sense at u and ρ(K, u)u is a normal boundary point of K, we have: where x = ρ(K, u)u ∈ ∂K and u = ‖x‖^{−1} x ∈ S^{d−1}. Hence, we have The bijective and bilipschitz transformation T : S^{d−1} → ∂K, u ↦ ρ(K, u)u, has the Jacobian for H^{d−1}-almost all u ∈ S^{d−1} (see the proof of [19, Lemma 2.4]). Therefore, we have since h_{K*}(x) = 1 for x ∈ ∂K and x* := ∇h_{K*}(x) satisfies ‖x*‖^{−1} = ⟨x, σ_K(x)⟩ as well as x*/‖x*‖ = σ_K(x), for H^{d−1}-almost all x ∈ ∂K.
Note. In Theorem 3.1 and Corollary 3.2 the assumptions on λ and ϱ can be slightly weakened. It is sufficient to assume that the integrable function λ and the probability density ϱ are continuous, and that ϱ is positive, at each boundary point of K. This follows from a compactness argument which shows that λ is bounded and ϱ is bounded from above and from below by a positive constant in a suitable neighborhood of ∂K. A compactness argument also yields the continuity properties of λ and ϱ that are actually needed in the proofs. A similar remark applies to property (q2) in Section 5.