In the world of structural engineering, we are challenged to consider many design options, to find the fittest solution in the shortest time, and to be leaders in economic structure design. In this article, you will discover how computational design techniques can help automate this process in design and fabrication workflows for building structures. You will be blown away by structural engineering workflows that you can use to optimize your designs with Dynamo. All kinds of techniques and computational design approaches will be considered to generate, evaluate, evolve, and cycle through structural design processes. You will understand how to perform your own design automation for repetitive structural tasks.
Outcome-based BIM technology helps generate a virtually infinite number of design alternatives.
Start with your desired outcome, then explore a near-infinite range of possibilities and produce optimal designs in a fraction of the time; the power of the cloud helps make this a reality today.
Designers and engineers can generate multiple options, ideas and scenarios more rapidly than ever before - exploring forms, simulating performance, and impressing clients with optimal designs.
Computational Design for BIM is an intelligent model-based process that provides a framework for negotiating and influencing the interrelation of internal and external building parameters.
In relation to design, computation involves the processing of information and interactions between elements which constitute an environment. Computational Design provides a framework for negotiating and influencing the interrelation of both internal and external properties, with the capacity to generate complex order, form, and structure. By combining the principles of computational design with Building Information Modeling, a fundamentally new method of building design is made possible.
Literally, “computational design” means “giving shape by means of computing power and scripting.”
Autodesk’s answer to this new challenge in our design world is Dynamo. Dynamo lets designers and engineers create visual logic to explore parametric designs and automate tasks. It helps you solve challenges faster by designing workflows that drive the geometry and behavior of design models. With Dynamo you can extend your designs into interoperable workflows for documentation, fabrication, coordination, simulation, and analysis.
Optimization is the selection of a best element (with regards to some criterion) from some set of available alternatives. In its simplest form, an optimization problem consists of maximizing or minimizing a function by choosing input values from a given range and computing the value of the output function.
From the definition above we can conclude that optimization is about finding the best solution within a set of alternatives. Usually when thinking of “optimization” we have one objective in mind, which is maximized or minimized. This is called Single Objective Optimization (SOO): the process of finding the single best solution to a problem. With SOO you find the best solution either by not taking any other factors of the problem into account, or by combining all the factors of the problem into one objective.
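In its simplest form, SOO can be sketched in a few lines of Python; the objective function and input range below are illustrative assumptions, not taken from any particular design problem:

```python
# Single-objective optimization by scanning a discretized input range:
# pick the input value that minimizes the objective function.
def objective(x):
    # Illustrative objective: a quadratic with its minimum at x = 3.
    return (x - 3) ** 2 + 1

# Candidate inputs sampled from the allowed range [0, 10].
candidates = range(0, 11)

# The "best" element is the one with the smallest objective value.
best = min(candidates, key=objective)
print(best, objective(best))  # → 3 1
```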
Optimization in the AEC industry is rarely a single-objective problem. Whether working with design, engineering, or construction, finding one optimal solution is rarely possible or desired. Usually, optimization involves multiple competing objectives, so it becomes a matter of finding the best trade-off between these objectives rather than finding the one best solution.
Multi-objective optimization (MOO) is an area of multiple-criteria decision making that is concerned with mathematical optimization problems involving more than one objective function to be optimized simultaneously. Multi-objective optimization is applied in many fields where optimal decisions need to be taken in the presence of trade-offs between two or more conflicting objectives.
Minimizing cost while maximizing comfort when buying a car, or maximizing the performance of a vehicle while minimizing its fuel consumption and emission of pollutants, are examples of multi-objective optimization problems involving two and three objectives, respectively. In practical problems, there can be more than three objectives.
For a nontrivial multi-objective optimization problem, no single solution exists that simultaneously optimizes each objective. In that case, the objective functions are said to be conflicting, and there exists a (possibly infinite) number of Pareto optimal solutions. A solution is called non-dominated or Pareto optimal if none of the objective functions can be improved in value without degrading some of the other objective values. Without additional subjective preference information, all Pareto optimal solutions are considered equally good (as vectors cannot be ordered completely). The goal may be to find a representative set of Pareto optimal solutions, and/or quantify the trade-offs in satisfying the different objectives, and/or finding a single solution that satisfies the subjective preferences of a human decision maker.
For a design to be in the Pareto optimal set, it cannot be dominated by another solution. If a solution is at least as bad as another solution on every objective, and strictly worse on at least one, then it is dominated and not in the Pareto optimal set.
The curve connecting all non-dominated solutions (see figure above) is known as the Pareto frontier. Deciding which solution on the Pareto frontier is the favorite design is up to the decision maker. When dealing with MOO problems some information is usually missing: the desirable objectives are given, but the final information that separates two solutions on the Pareto frontier must be weighed by the decision maker. Often, the missing information is something that is difficult to measure, like constructability or aesthetics, and therefore the decision making needs human interaction.
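The dominance test described above can be sketched in a few lines of Python. The (cost, deformation) pairs below are made-up illustrative values, and both objectives are assumed to be minimized:

```python
# Filter a set of candidate designs down to the Pareto optimal
# (non-dominated) set, assuming every objective is to be minimized.
def dominates(a, b):
    # a dominates b if a is no worse on every objective
    # and strictly better on at least one.
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(solutions):
    return [s for s in solutions
            if not any(dominates(other, s) for other in solutions)]

# Illustrative (cost, deformation) pairs for five candidate designs.
designs = [(10, 5), (8, 7), (12, 3), (9, 6), (11, 6)]
print(pareto_front(designs))  # → [(10, 5), (8, 7), (12, 3), (9, 6)]
```

Only (11, 6) is dropped: it is no better than (9, 6) on either objective and strictly worse on the first, so it is dominated.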
Generative design is an iterative design process in which a program generates a certain number of outputs that meet certain constraints, while a designer fine-tunes the feasible region by changing the minimal and maximal values of the intervals within which the program’s variables must satisfy the constraints, in order to reduce or expand the number of outputs to choose from.
A Genetic Algorithm (GA) is a search-based optimization technique based on the principles of natural selection. GAs are often used to find optimal or near-optimal solutions to complicated problems, where a brute-force method would take too long and use too much computing power.
A GA is a great method for solving MOO problems, because it can efficiently locate many Pareto optimal solutions in a large solution space. Let’s look at a simple example of why a search-based optimization technique like a GA is well suited to solving MOO problems.
“I'm thinking of a number between one and one billion. How long will it take for you to guess it? Solving a problem with 'brute force' refers to the process of checking every possible solution. Is it one? Is it two? Is it three? Is it four? And so forth. Though luck does play a factor here, with brute force we would often find ourselves patiently waiting for years while you count to one billion. However, what if I could tell you if an answer you gave was good or bad? Warm or cold? Very warm? Hot? Super, super cold? If you could evaluate how 'fit' a guess is, you could pick other numbers closer to that guess and arrive at the answer more quickly. Your answer could evolve." —Source: https://natureofcode.com/book/chapter-9-the-evolution-of-code/
The tools used here make use of the NSGA-II (Non-dominated Sorting Genetic Algorithm-II) evolutionary algorithm.
A typical workflow in a structural optimization process is represented in the flowchart below:
The population size depends on the nature of the problem, but typically contains several hundreds or thousands of possible solutions. Often, the initial population is generated randomly, allowing the entire range of possible solutions (the search space). Occasionally, the solutions may be "seeded" in areas where optimal solutions are likely to be found.
During each successive generation, a portion of the existing population is selected to breed a new generation. Individual solutions are selected through a fitness-based process, where fitter solutions (as measured by a fitness function) are typically more likely to be selected. Certain selection methods rate the fitness of each solution and preferentially select the best solutions. Other methods rate only a random sample of the population, as the former process may be very time consuming.
The fitness function is defined over the genetic representation and measures the quality of the represented solution. The fitness function is always problem dependent.
For instance, in the knapsack problem one wants to maximize the total value of objects that can be put in a knapsack of some fixed capacity. A representation of a solution might be an array of bits, where each bit represents a different object, and the value of the bit (0 or 1) represents whether the object is in the knapsack. Not every such representation is valid, as the size of objects may exceed the capacity of the knapsack. The fitness of the solution is the sum of values of all objects in the knapsack if the representation is valid, or 0 otherwise.
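The knapsack fitness function described above can be sketched as follows; the object values, weights, and the capacity are illustrative numbers:

```python
# Fitness function for the knapsack problem: a candidate solution is a
# bit array with one bit per object (1 = the object is in the knapsack).
values   = [60, 100, 120]   # value of each object (illustrative numbers)
weights  = [10, 20, 30]     # size/weight of each object
capacity = 50               # fixed knapsack capacity

def fitness(bits):
    total_weight = sum(w for w, b in zip(weights, bits) if b)
    if total_weight > capacity:
        return 0  # invalid representation: objects exceed the capacity
    return sum(v for v, b in zip(values, bits) if b)

print(fitness([1, 1, 1]))  # → 0   (total weight 60 exceeds capacity 50)
print(fitness([0, 1, 1]))  # → 220 (total weight 50 fits exactly)
```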
In some problems, it is hard or even impossible to define the fitness expression; in these cases, a simulation may be used to determine the fitness function value of a phenotype (e.g. computational fluid dynamics is used to determine the air resistance of a vehicle whose shape is encoded as the phenotype), or even interactive genetic algorithms are used.
The next step is to generate a second-generation population of solutions from those selected through a combination of genetic operators: crossover (also called recombination), and mutation.
For each new solution to be produced, a pair of "parent" solutions is selected for breeding from the pool selected previously. By producing a "child" solution using the above methods of crossover and mutation, a new solution is created which typically shares many of the characteristics of its "parents." New parents are selected for each new child, and the process continues until a new population of solutions of appropriate size is generated. Although reproduction methods that are based on the use of two parents are more "biology inspired", some research suggests that more than two "parents" generate higher quality chromosomes.
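The two genetic operators can be sketched for bit-string chromosomes as follows; the chromosome length and mutation rate are illustrative:

```python
import random

# Single-point crossover: splice two parent bit strings at a random cut,
# so the child shares characteristics of both parents.
def crossover(parent_a, parent_b):
    cut = random.randint(1, len(parent_a) - 1)
    return parent_a[:cut] + parent_b[cut:]

# Mutation: flip each bit with a small probability,
# which preserves genetic diversity in the population.
def mutate(bits, rate=0.01):
    return [1 - b if random.random() < rate else b for b in bits]

child = mutate(crossover([1, 1, 1, 1], [0, 0, 0, 0]))
```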
These processes ultimately result in the next generation population of chromosomes that is different from the initial generation. Generally, the average fitness will have increased by this procedure for the population, since only the best organisms from the first generation are selected for breeding, along with a small proportion of less fit solutions. These less fit solutions ensure genetic diversity within the genetic pool of the parents and therefore ensure the genetic diversity of the subsequent generation of children.
In addition to the main operators above, other heuristics may be employed to make the calculation faster or more robust. The speciation heuristic penalizes crossover between candidate solutions that are too similar; this encourages population diversity and helps prevent premature convergence to a less optimal solution.
This generational process is repeated until a termination condition has been reached. Common terminating conditions are:
➢ A solution is found that satisfies minimum criteria
➢ Fixed number of generations reached
➢ The highest-ranking solution's fitness is reaching or has reached a plateau such that successive iterations no longer produce better results
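The generational process and its termination conditions can be tied together in a minimal sketch. The toy problem below is “OneMax” (maximize the number of 1-bits in a chromosome), and all parameter values are illustrative:

```python
import random

# Minimal generational GA for the toy "OneMax" problem.
GENES, POP, MAX_GEN, MUT = 20, 50, 200, 0.02

def fitness(ind):
    return sum(ind)  # count of 1-bits; the maximum is GENES

def tournament(pop):
    # Fitness-based selection: pick the fitter of two random individuals.
    return max(random.choice(pop), random.choice(pop), key=fitness)

random.seed(42)  # fixed seed so the run is repeatable
pop = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
for gen in range(MAX_GEN):                     # fixed generation limit
    if fitness(max(pop, key=fitness)) == GENES:
        break                                  # minimum criteria satisfied
    children = []
    for _ in range(POP):
        pa, pb = tournament(pop), tournament(pop)
        cut = random.randint(1, GENES - 1)                   # crossover
        child = [1 - g if random.random() < MUT else g       # mutation
                 for g in pa[:cut] + pb[cut:]]
        children.append(child)
    pop = children  # the next generation replaces the previous one

print("generations:", gen, "best fitness:", fitness(max(pop, key=fitness)))
```

A real NSGA-II run adds non-dominated sorting and crowding-distance ranking on top of this loop to handle multiple objectives, but the generational skeleton is the same.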
In computer science, brute-force search or exhaustive search, also known as generate and test, is a very general problem-solving technique and algorithmic paradigm that consists of systematically enumerating all possible candidates for the solution and checking whether each candidate satisfies the problem's statement.
While a brute-force search is simple to implement, and will always find a solution if it exists, its cost is proportional to the number of candidate solutions—which in many practical problems tends to grow very quickly as the size of the problem increases (combinatorial explosion).
Therefore, brute-force search is typically used when the problem size is limited, or when there are problem-specific heuristics that can be used to reduce the set of candidate solutions to a manageable size. The method is also used when the simplicity of implementation is more important than speed.
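For the small knapsack instance used earlier, brute force is perfectly feasible: with n objects there are only 2^n candidate bit strings to enumerate (the numbers are again illustrative):

```python
from itertools import product

# Brute-force (exhaustive) search over a small knapsack instance:
# systematically enumerate every candidate bit string and keep the
# best one that satisfies the capacity constraint.
values, weights, capacity = [60, 100, 120], [10, 20, 30], 50

best_value, best_bits = 0, None
for bits in product([0, 1], repeat=len(values)):   # 2^n candidates
    weight = sum(w for w, b in zip(weights, bits) if b)
    value  = sum(v for v, b in zip(values,  bits) if b)
    if weight <= capacity and value > best_value:
        best_value, best_bits = value, bits

print(best_bits, best_value)  # → (0, 1, 1) 220
```

With 3 objects this checks 8 candidates; with 50 objects it would check about 10^15, which is the combinatorial explosion that makes a GA attractive.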
Case Study: Spatial Truss Deformation
In this case study a conceptual analysis is performed for the deformation of a spatial truss structure for a cantilever roof.
The optimization problem consists of finding the best truss configuration with three objectives:
- Maximize Platform Area
- Minimize Deformation
- Minimize Structure Weight
The configuration of the truss can be changed, by varying these inputs:
- Height of the truss at start
- Height of the truss at end
- Truss Divisions = number of panels/squares along the width
The top roof surface is defined by generating lines and lofting them into a curved roof surface. The lines are defined by points, plotted with specific coordinates depending on the platform length (L), platform width (W), truss start depth (SD), truss end depth, and the platform top levels (Z1 and Z2). The bottom surface is defined in a similar way and will be used to project the bottom of the truss division.
With the custom node Quad Panels from the BIM4Struc.Productivity package, quad panels are created from the top surface.
The points from the quad panels are projected onto the bottom surface and then reused to create the spatial truss lines, using sorting methods based on the X- and Y-axis positions of the points at top and bottom.
The conceptual analysis of the deformation is performed with the DynaShape package. This package allows the user to perform constraint-based form finding, optimization and physics simulation.
From the defined geometry, the anchor points are collected using sorting methods, based on the coordinates of the generated points.
The weight input values for the goals define their importance in the analysis. The AnchorGoal simulates a support by keeping the truss node at a specified anchor point. By default, the weight for this goal is set very high to ensure the node really “sticks” to the anchor.
The LengthGoal makes it possible to maintain the specified distance between two nodes located at the start and end point of the given line. When you increase the value of weight you simulate an increase in stiffness of the line.
The ConstantGoal applies a constant directional offset to the specified points. For example, this is useful to simulate gravity.
The loads here are defined by the self-weight of a beam and the weight of a panel. This weight is distributed to the four corner points of a quad panel.
The weight will change depending on the platform dimensions.
The solver executes in silent mode. This is needed to make it compatible with Refinery in a later phase.
The GeometryColor node changes the color of the resulting geometry, to see a difference between the original structure (black) and the deformed shape (green). This deformation is influenced by the goals defined in Step 2 above.
The results from this analysis are used for minimizing the objectives.
- Deformation Ratio: Difference between original structure and structure after DynaShape analysis
- Material Score: Total material used, by making the sum of all curve lengths
- Platform Area: Platform Width x Platform Length
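As a rough sketch, the three outputs could be computed as below. The member list and the deformation measure are illustrative assumptions based on the descriptions above, not the actual DynaShape output:

```python
import math

# Illustrative computation of the three objective values.
def material_score(members):
    # Total material used: the sum of all member (curve) lengths,
    # where each member is a pair of 3D end points.
    return sum(math.dist(p, q) for p, q in members)

def deformation_ratio(original_nodes, deformed_nodes):
    # One possible measure: the average nodal displacement between the
    # original structure and the structure after the DynaShape analysis.
    displacements = [math.dist(a, b)
                     for a, b in zip(original_nodes, deformed_nodes)]
    return sum(displacements) / len(displacements)

def platform_area(width, length):
    return width * length

print(platform_area(12.0, 30.0))  # → 360.0
```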
Optimization in Refinery
The script can be run in a randomization (for brute-force search) or optimization process with Project Refinery.
The inputs and outputs needed for Refinery need to be defined in the Dynamo script.
The inputs can be sliders. They need to be indicated as “Is Input” in the contextual menu of the node (right-mouse click on the node).
The outputs need to be named Watch nodes. Rename a node by double-clicking its header and changing the name. Besides that, the node needs to be set as “Is Output” in the contextual menu of the node.
Before you create a new study in Refinery, make sure the script has been executed manually and the script is saved.
1. Open Refinery in Dynamo in the menu View > Refinery
2. Select the Generation Method
3. Expand Inputs and choose which inputs participate in automated runs
4. Expand Outputs and choose what optimization criteria should be applied.
5. Expand Settings to define the generation criteria.
6. Click Generate to start the run.
In Refinery, use the icons at the top to choose the output visualization (diagram, table, thumbnails, parallel coordinates diagram).
By clicking one of the thumbnails or a scatter point in the diagram or a line in the table, the input parameters in Dynamo change accordingly.
When the run method in Dynamo is set to Automatic, you can see immediate results in the Dynamo graph canvas.
Visualization of Geometry
The blue groups in the script contain nodes to represent the generated geometry with other colors.
Dieter Vermeulen works as a technical specialist for the Northern European region at Autodesk, specialized in the products of the Computational Design and Engineering portfolio. Within that domain he supports the authorized Autodesk channel partners and customers with innovative workflows and solution strategies. He evangelizes the power of computational design with Dynamo in the building and infrastructure industry. This results in workflows covering the process from design, analysis, construction to fabrication for structural steel and reinforced concrete structures in building and infrastructure projects.
Want more? Read on by downloading the full class handout.