With the increasing focus on computation and data-driven solutions, architects and designers need to become more comfortable with data preparation and manipulation. One of the biggest risks of the growing use of tools such as Dynamo is that a large amount of data manipulation ends up defined in the scripts themselves. This locks the business logic for the data into the script, where it is accessible to only a subset of users. We have found that, on large projects, it's better to use data-centric tools to format and manipulate the data before feeding it into Dynamo. This makes it possible for the final data set to drive multiple outputs. In this class, we will focus on the key attributes of a good data process, using a recent project as an example. We will walk you through the cleaning process, the script development in Dynamo, and the graphic production process in Revit software, which was complementary to other data-visualization tools. We'll also review the decision-making process behind our choice between Revit and FormIt software.
- Learn how to assess raw data and identify opportunities to normalize the data for consumption by tools
- Learn how to define a process for converting raw data to a usable data model
- Discover how users can migrate data processing out of their scripts and into tools better suited for data manipulation
- Learn how to build a process for generating a variety of graphic outputs based on data
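To make the idea of pre-Dynamo cleanup concrete, here is a minimal sketch of the kind of normalization that is easier to do in a data-centric tool than inside a script. The raw CSV content, field names, and cleanup rules below are hypothetical, invented purely for illustration; they stand in for a messy schedule export with inconsistent casing, embedded units, and blank fields.

```python
import csv
import io

# Hypothetical raw export: inconsistent name casing, units embedded in the
# area values, and incomplete rows -- cleanup better handled before Dynamo.
RAW = """Room Name,Area,Level
 lobby ,1200 SF,L1
Conf Rm A,,L2
LOBBY,1180 sf,L1
"""

def normalize(text: str) -> list[dict]:
    """Return cleaned records: trimmed names, numeric areas, complete rows only."""
    rows = []
    for rec in csv.DictReader(io.StringIO(text)):
        name = rec["Room Name"].strip().title()
        # Strip a trailing "SF" unit marker regardless of case or spacing.
        area = rec["Area"].strip().upper().removesuffix("SF").strip()
        if not area:
            continue  # drop incomplete records before they reach the script
        rows.append({"name": name, "area": float(area), "level": rec["Level"]})
    return rows

clean = normalize(RAW)
```

Because the output is a uniform, typed data set rather than logic buried in a graph, the same cleaned file can feed a Dynamo script, a Revit schedule, or a separate visualization tool.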