Optimization on a Fractal Feasible Region

© Copyright 1991, 1996 by Timothy D. Greer. Since this article has not been published elsewhere, I reserve rights to it that are not explicitly mentioned in the IBM copyright statement below.


This paper presents a method for optimizing a linear function on a fractal feasible region. The method requires that the fractal be represented as an iterated function system with a finite number of affine mappings. The technique is to construct a nested sequence of convex sets (polyhedrons) containing the fractal, and then use information derived from the construction of the convex sets to find a feasible solution near the upper bound that the enclosing convex sets provide. The feasible solution and the upper bound may be made arbitrarily close.

Some typographical notes: The HTML character set lacks a lot of mathematical symbols, so I'm forced to improvise. R² looks ok for 2-dimensional Euclidean space, but for n dimensions I had to use R**n. There are no symbols for set relations, so I've chosen » to indicate superset. I.e. A » B means "A contains B". I've also used parentheses in places I would have preferred subscripts.


Consider the problem of maximizing a linear function constrained to a fractal feasible region. The problem might come up when using fractals as realistic models of physical objects [1]. "How tall is my fractal tree?", for instance. Fractals have also been proposed as models of business relationships, where decisions based on the model require solution of this problem [2]. Here is a solution method for fractals defined as iterated function systems [3]. The method works for a fractal imbedded in any Euclidean space, although it becomes progressively less efficient in higher dimensions because the number of points whose convex hull we must find may be expected to increase. Iterated function systems are a convenient way of defining a large class of fractals [4].

In this paper we deal with the relatively simple case of iterated function systems made up of affine functions. If all the functions are contraction mappings, the iterated function system composed of them uniquely defines [5] the fractal which is our feasible region. Barnsley [6] describes how to determine contraction mappings which define a fractal approximating a desired shape.

The fractal is thus defined by a collection of affine contraction mappings in R**n. In R², if a circle is the domain of such a mapping, the range will be an ellipse whose major axis is shorter than the diameter of the circle. (The ellipse may of course be degenerate -- a line segment or a point -- and may be a circle.) If a circle encloses a fractal defined by such mappings, the union of the images of the circle under the mappings will contain the fractal, by definition of the fractal. Analogous things happen in higher dimensions. It is helpful to keep this geometric image in mind for what follows.
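To make the setting concrete, here is a small sketch in Python. The Sierpinski triangle is used as an illustrative example of an iterated function system of three affine contraction mappings in R²; it is my choice, not an example from the text, and the names are hypothetical.

```python
import random

# A hypothetical example (not from the text): the Sierpinski triangle as an
# iterated function system of three affine contraction mappings in R**2.
MAPS = [
    lambda p: (0.5 * p[0],        0.5 * p[1]),
    lambda p: (0.5 * p[0] + 0.5,  0.5 * p[1]),
    lambda p: (0.5 * p[0] + 0.25, 0.5 * p[1] + 0.5),
]

def chaos_game(n, seed=0):
    """Generate n points of the attractor by random iteration.  The start
    is the fixed point of the first map, so every point lies on the fractal."""
    random.seed(seed)
    p = (0.0, 0.0)  # fixed point of MAPS[0]
    points = []
    for _ in range(n):
        p = random.choice(MAPS)(p)
        points.append(p)
    return points
```

For these three maps the attractor fits in the unit square, so every generated point has coordinates in [0, 1].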

Detail and Proof of Method

What follows is a method to reduce our optimization problem to one of optimization on a polyhedron. Since approaches to linear optimization on a polyhedron are abundant and well-studied, we will regard our problem as solved at that point. We find a hypersphere containing the fractal, then a polyhedron containing the fractal, and then a nested sequence of polyhedrons containing the fractal. The nested sequence has the convex closure of the fractal as its limit, so by solving a (linear) optimization problem on the innermost of the nested polyhedrons, we have an approximation to the solution on the convex closure of the fractal. We then use information from the construction of the polyhedrons to obtain a nearby feasible point.

In [7], the "lasso algorithm" is presented as a way to determine a minimal hypersphere which encloses its image under each of the mappings comprising an iterated function system. Using this as a starting point, we have an upper bound for the extent of the fractal in any given direction. Now increase the radius of the enclosing hypersphere by some small amount, to obtain a hypersphere we will call H. H has the property that it completely encloses, and does not intersect, its image under each of the mappings.
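The lasso algorithm itself is described in [7] and is not reproduced here. As an illustration only, the sketch below uses a cruder bound that also yields an enclosing hypersphere: for an affine contraction w(x) = Ax + b with contractivity s = |A|, a ball B(c, r) contains its own image whenever |w(c) - c| + s·r <= r. This is an assumption of my own, not the method of [7], and it generally gives a larger (non-minimal) sphere.

```python
import numpy as np

def enclosing_radius(maps, center, slack=1e-6):
    """A crude substitute for the lasso algorithm of [7]: for each affine
    contraction w(x) = A x + b with contractivity s = ||A||_2, the ball
    B(center, r) contains its image whenever
        |w(center) - center| + s*r <= r,
    i.e. r >= |w(center) - center| / (1 - s).  The max over the maps,
    plus some slack, gives a hypersphere H that strictly encloses its
    images under every map, and hence the fractal."""
    c = np.asarray(center, dtype=float)
    r = 0.0
    for A, b in maps:
        s = np.linalg.norm(A, 2)  # contractivity (spectral norm) of this map
        assert s < 1.0, "all maps must be contractions"
        r = max(r, np.linalg.norm(A @ c + b - c) / (1.0 - s))
    return r + slack
```

The slack plays the role of the small radius increase in the text: it guarantees that H strictly encloses, and does not touch, its images.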

We now find a polyhedron contained in H which contains all the images of the hypersphere under the mappings. In R² we pick an arbitrary direction and, for the image of H under each mapping, draw the tangents perpendicular to that direction. Since the number of mappings is finite, one of these tangents is farthest in the given direction. All the images of H, and hence also the fractal, lie to one side of this tangent. The vector from the center of H to one of the points of intersection of H and this farthest tangent gives us a new direction. Repeat, finding the farthest tangent in this new direction, and choose the intersection with H on the fractal side of the previous tangent, to get yet a third direction. Since H contains and does not intersect its image under each map, it is apparent that the intersection of each successive pair of these "farthest tangents" will be in the interior of H. In addition, because the number of mappings is finite, each successive direction will be more than infinitesimally farther around the circle; repeated application of this process will move the new direction all the way around the circle. When the new direction has progressed more than 360 degrees, stop. The intersections of the successive farthest tangents, plus the intersection of the last with the first, provide the vertices of the desired polygon.
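A sketch of the R² computation, assuming H = B(center, r) and the affine maps given as (A, b) pairs. The support value of the image of H under w(x) = Ax + b in unit direction d is d·(A·center + b) + r·|Aᵀd|; the maximum over the maps locates the farthest tangent. For simplicity this sketch sweeps a fixed fan of directions rather than choosing each new direction adaptively from tangent intersections as the text describes, so it is a simplification, not the algorithm itself.

```python
import numpy as np

def farthest_support(maps, center, r, d):
    """Support value, in unit direction d, of the union of the images of
    H = B(center, r): the image of H under w(x) = A x + b is an ellipse
    with support d.(A center + b) + r*|A^T d|; the max over the maps is
    the "farthest tangent" of the text."""
    c = np.asarray(center, float)
    d = np.asarray(d, float)
    return max(d @ (A @ c + b) + r * np.linalg.norm(A.T @ d) for A, b in maps)

def bounding_polygon(maps, center, r, m=64):
    """Simplification of the text's construction: sweep a fixed fan of m
    directions.  Each (d, h) pair is a half-plane d.x <= h containing the
    fractal, and the intersection of the half-planes is a polygon."""
    halfplanes = []
    for k in range(m):
        theta = 2.0 * np.pi * k / m
        d = np.array([np.cos(theta), np.sin(theta)])
        halfplanes.append((d, farthest_support(maps, center, r, d)))
    return halfplanes
```

Each half-plane has the fractal entirely to one side, exactly as the farthest tangents do in the text.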

The description in R**n is perhaps harder to follow, but is essentially the same. Pick a direction. Using ordinary optimization techniques of vector calculus, one can find the supporting hyperplane in this direction for the image of H under each mapping. Thus we can find the farthest such hyperplane in the picked direction. Since the hypersphere encloses the fractal, and the union of its images under the various mappings also encloses the fractal, the fractal must lie entirely on the side of the hyperplane where the images of H are. Choosing a point on the intersection of the hyperplane and H defines a new direction from the center of H. We can likewise find a supporting hyperplane in this new direction, which also has the fractal to one side only. Another hyperplane is obtained by choosing a point on the intersection of H and the second hyperplane, to the fractal side of both the first and second hyperplanes. Continue this process, always selecting the new point on the intersection of H and one of the hyperplanes, and always to the fractal side of all the hyperplanes. With each step, if there are any points left to select, it is always possible to select one which trims off some finite surface hypervolume of H. Because the images of H lie strictly within the interior of H, and there are a finite number of mappings, it is possible to select the point in such a way that the trimmed surface will not approach a zero limit. Thus eventually there will be no points left to select, and the intersection of the fractal-side half-spaces forms a polyhedron which lies entirely in the interior of H, and which contains the fractal.

Not only does this polyhedron enclose the fractal, it encloses the image of H under each of the mappings. Since the polyhedron is in turn enclosed by H, the polyhedron also encloses its own image under each of the mappings. Let P denote the polyhedron, let P(1) denote the convex hull of the images of P under all the mappings, and let P(i) denote the convex hull of the images of P(i-1) for i>1. Then we have
H » P » P(1) » P(2) ... » P(i) ...
Furthermore, it follows trivially from Theorem 1 in [8] that the vertices of P(i) get arbitrarily close to the fractal as i --> infinity. Therefore,

       lim(i-->infinity) P(i) = C
where C is the convex closure of the fractal.
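The nested sequence is straightforward to compute in R². A sketch, assuming the maps are given as (A, b) pairs; it uses a standard monotone-chain convex hull routine rather than any particular library, and the function names are my own.

```python
import numpy as np

def convex_hull(points):
    """Andrew's monotone chain: hull vertices in counter-clockwise order."""
    pts = sorted(set(map(tuple, points)))
    if len(pts) <= 2:
        return [np.array(p) for p in pts]
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return [np.array(p) for p in lower[:-1] + upper[:-1]]

def next_polyhedron(maps, vertices):
    """P(i): the convex hull of the images of P(i-1)'s vertices under all maps."""
    images = [tuple(A @ v + b) for A, b in maps for v in vertices]
    return convex_hull(images)
```

Starting from any polygon that contains its own images and iterating next_polyhedron, the maximum of the linear objective over the current vertices is the upper bound of the text, and it converges geometrically toward the value on C.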

We now have an upper bound for the extent of the fractal in any given direction. To find a feasible solution, notice that each vertex of P(i) can be labeled with the mapping used to obtain it from a vertex of P(i-1), concatenated with the mapping used to obtain that vertex from a vertex of P(i-2), and so on. Then each vertex of P(i) is labeled with the sequence of mappings required to obtain it from some initial point. Barnsley's Shadowing Theorem [9] assures us that starting with any point in the space, applying the mappings from a sufficiently long label will bring us to a point arbitrarily close to the labeled point. So to obtain a feasible point close to a vertex of P(i), start with a point known to be on the fractal (the fixed point of the first mapping, for example) and apply in order the i mappings of the vertex label. Thus we can obtain a feasible solution arbitrarily close to a point on the convex closure of the fractal.
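A sketch of this final step, assuming each label records the map indices in application order (the first map applied is listed first); the names are illustrative, not from the text.

```python
import numpy as np

def fixed_point(A, b):
    """Fixed point x = A x + b of an affine contraction: x = (I - A)^{-1} b."""
    return np.linalg.solve(np.eye(len(b)) - A, np.asarray(b, float))

def feasible_from_label(label, maps):
    """Start from a point known to be on the fractal (the fixed point of the
    first mapping) and apply the labeled mappings in order.  After i maps
    the result is within s**i * D of the labeled vertex, where s is the
    largest contractivity and D bounds the distance between the two
    starting points, so the approximation error shrinks geometrically."""
    x = fixed_point(*maps[0])
    for k in label:
        A, b = maps[k]
        x = A @ x + b
    return x
```

Because every map sends fractal points to fractal points, the result is always feasible, no matter how short the label.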

Notes on implementation complexity

The lasso algorithm in [7] runs in O(n) time, where n is the number of mappings used to define the fractal. The time required to obtain the initial polyhedron enclosing the fractal depends on how much bigger H is than the hypersphere produced by the lasso algorithm. For each surface, O(n) calculations are required, where again n is the number of mappings. The time to produce each subsequent polyhedron depends on the time required to solve the convex hull problem. Finding the convex hull of a set of points is solvable in O(n log(n)) [10], where n is the number of points. The number of points generally grows with each subsequent polyhedron, and ultimately depends on the shape of the fractal. With each subsequent polyhedron, the distance from a point on the polyhedron to the nearest point on the convex closure of the fractal will be reduced by at least the factor given by the contractivity of the least-contractive mapping, so the number of iterations required for a given accuracy depends geometrically on that contractivity. It also clearly depends on how much bigger H is than the original hypersphere obtained by the lasso algorithm.

The author has implemented this algorithm for fractals imbedded in R². For higher-dimensional implementations, substantial programming effort might be avoided with minimal loss in efficiency by replacing H with two nested hyperspheres, each enclosing the fractal and the hypersphere produced by the lasso algorithm, adjusting their sizes so that the images of the larger hypersphere lie within the smaller. Then an initial polyhedron may be obtained by simply inscribing a polyhedron within the larger hypersphere, with enough faces that it contains the smaller hypersphere. Then we have a polyhedron that encloses its image under each mapping, and the iterative improvement using convex hulls can begin.
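In R² the face count needed for such an inscribed polyhedron is easy to pin down: a regular m-gon inscribed in a circle of radius R has apothem R·cos(pi/m), so it contains the concentric circle of radius r as soon as R·cos(pi/m) >= r. A sketch of this arithmetic, assuming concentric circles and a regular polygon (assumptions of this illustration, not requirements of the text):

```python
import math

def min_polygon_sides(R, r):
    """Smallest m such that a regular m-gon inscribed in a circle of radius
    R contains the concentric circle of radius r: the apothem R*cos(pi/m)
    must be at least r, so m >= pi / arccos(r / R)."""
    assert 0.0 < r < R
    return math.ceil(math.pi / math.acos(r / R))
```

The closer the two radii are, the more faces are needed, which is the efficiency loss the two-hypersphere shortcut trades for simpler programming.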


  1. Mandelbrot, Benoit B. The Fractal Geometry of Nature. W. H. Freeman and Co., New York, 1983, pp. 1-5.
  2. Greer, Timothy D. "Two Fractal Models for Operations Research," Society of Industrial Engineers Envoy, Volume 1, No. 3 (January/February 1992), pp. 3-5.
  3. Barnsley, Michael F. Fractals Everywhere. Academic Press, Boston, 1988, page 82.
  4. Ibid.
  5. Ibid.
  6. Ibid, pp. 96-113.
  7. Greer, Timothy D. "Method for Finding Geometrically Similar Fractal Confined to Fixed Region," IBM Technical Disclosure Bulletin, Volume 34, No. 7B (December 1991), pp. 24-25.
  8. Barnsley, Michael F. Fractals Everywhere. Academic Press, Boston, 1988, page 82.
  9. Ibid, page 160.
  10. Sedgewick, Robert. Algorithms. Addison-Wesley, Reading, Mass. 1983, page 332.

The information provided, and views expressed on this site are my own and do not represent the IBM Corporation.

