A single idea can be expressed in countless ways, and the field of data representation and visualization is accordingly in a continuous state of flux and evolution. The scientific community in particular is constantly searching for new, creative ways to organize and interpret scientific data more easily and accessibly. A data set may remain unchanged, but the number of ways in which its data can be displayed, viewed, represented, and subsequently interpreted is virtually limitless. It is through this multimodal analysis process that we are able to gain a well-rounded understanding of the information at hand.
In this review, we used SNP biomarker data for Alzheimer’s disease to construct a three-dimensional digital representation of the gene-gene interaction network. We then created a physical model of the network using a 3-D printer to explore the advantages of using one or both of these visualization media. We suggest that useful information may be lost in the translation from physical structures to digital representations, and we therefore propose that a corporeal gene-gene interaction network model, used to supplement the digital SNPAttractor visualization software, may inspire additional insights into the meaning and interpretation of the genetic network. Beyond the kind of biological or statistical networks presented in our case study, there are numerous other potential uses of this technology in biological data mining. For example, one could imagine printing a physical model of a decision tree derived from a big-data source, a phylogenetic tree, or an ontology. An interesting challenge would be to print information visualizations such as scatterplots, barplots, boxplots, or even heat maps.
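As a minimal sketch of how such a printable object might be generated, the code below converts a small gene-gene network into an OpenSCAD script that can be rendered and exported for 3-D printing. The gene names and links are purely illustrative (they are not results from our study), and the layout, sizes, and the use of OpenSCAD's `hull()` trick for edge struts are design assumptions, not a description of the SNPAttractor pipeline.

```python
import math

def network_to_openscad(nodes, edges, radius=40.0, node_r=3.0, edge_r=1.2):
    """Place nodes evenly on a sphere (Fibonacci spiral) and emit an
    OpenSCAD script describing a printable model of the network."""
    pos = {}
    n = len(nodes)
    golden = math.pi * (3.0 - math.sqrt(5.0))  # golden-angle increment
    for i, name in enumerate(nodes):
        y = 1.0 - 2.0 * i / max(n - 1, 1)        # latitude from +1 to -1
        r = math.sqrt(max(0.0, 1.0 - y * y))     # ring radius at this latitude
        theta = golden * i
        pos[name] = (radius * r * math.cos(theta),
                     radius * y,
                     radius * r * math.sin(theta))
    lines = ["// auto-generated gene-gene network model"]
    # each node becomes a printable sphere
    for name, (x, y, z) in pos.items():
        lines.append(f"translate([{x:.2f},{y:.2f},{z:.2f}]) sphere(r={node_r});")
    # each edge becomes a strut: the convex hull of two small spheres
    for a, b in edges:
        pa, pb = pos[a], pos[b]
        lines.append(
            "hull(){translate([%.2f,%.2f,%.2f]) sphere(r=%.2f);"
            "translate([%.2f,%.2f,%.2f]) sphere(r=%.2f);}"
            % (*pa, edge_r, *pb, edge_r))
    return "\n".join(lines)

# Hypothetical Alzheimer's-associated genes and interactions (illustrative only)
genes = ["APOE", "CLU", "PICALM", "CR1", "BIN1"]
links = [("APOE", "CLU"), ("APOE", "PICALM"), ("CLU", "CR1"), ("PICALM", "BIN1")]
print(network_to_openscad(genes, links))
```

The emitted script can be opened in OpenSCAD and exported to STL for slicing; in practice, edge radii would need tuning so that thin struts survive printing.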
Our 3-D visualization software and digital genetic network offer many benefits but lack intuitiveness and may therefore withhold important, potentially idea-stimulating information. We suggest that there are differences in our ability to intuitively recognize, understand, and subsequently interpret digital versus physical 3-D information. As physical beings in a three-dimensional world, we have evolved to expertly interpret our physical surroundings; recognizing and understanding physical objects is therefore, to a certain degree, intuitive. A problem arises, however, when we are asked to understand a digital representation of a physical object. Until the recent advent of computers and tablets, there were no evolutionary pressures to hone our ability to interpret 3-D information through 2-D media, and such interpretation is therefore unintuitive. A study by Lowrie exposes the unintuitive nature of interpreting simulated 3-D objects. Lowrie investigated the ability of children to interpret screen-based computer images and relate them to real-world environments: of 6 children, only 2 were able to find relationships between simulated 3-D and real-world 3-D environments. Lowrie further suggests that the ability to infer relationships between simulated and actual 3-D environments can be enhanced through the construction and manipulation of 3-D models, a finding that demonstrates both the more innate nature of handling physical objects and the value of supplementing digital information with a physical counterpart. Other researchers echo these ideas. For example, Kawakami claims that because of the size and complexity of his molecular structures, digital model generation is difficult and peer discussions are laborious.
In response, Kawakami developed a physical, interactive protein model using 3-D printing technology that allows users to see, touch, and test ideas more easily and that can be used in conjunction with digital applications. These examples highlight the additional intuitive benefit of supplementing digital visualizations with physical models.
How have we evolved to expertly interpret physical stimuli, and how are these modes of stimulus sensation and perception altered when we translate a physical object into the digital realm? Quite simply, we have evolved to sense and perceive real-world stimuli through five sensory modalities – sight, smell, taste, touch, and hearing. By translating a physical structure into the digital realm, we instantly eliminate the option to use four of these five senses, and the efficacy of the remaining modality – vision – is also drastically reduced in the process. Visual resolution of the surrounding three-dimensional world is achieved through both stereopsis, the fusion of binocular images derived from retinal disparities to accurately communicate depth information, and monocular cues, a more general but less accurate visual-perceptual method. Although both our digital and physical genetic network models are “three-dimensional,” they differ in their method of presentation. While the physical network inhabits our tangible world, the digital network is presented through a 2-D medium – the computer screen. Interpretation of the former therefore permits the use of stereopsis, while interpretation of the latter relies on monocular cues, suggesting that we may lose valuable information in the translation from physical to digital 3-D data. When a data set expresses complex relationships in 3-D space, the ability to interpret those relationships accurately is vital. We therefore suggest that 3-D information may be more accurately perceived through the handling and examination of a physical structure than through its digital counterpart.
Digital visualization provides many capabilities that physical models cannot, such as the ability to view various spatial arrangements and the consequences of parameter changes in real time. However, we suggest that there may be advantages unique to experiencing data through a physical medium that should not be ignored. Interpretation of a 3-D data set may be both more intuitive and more accurate when experienced in the physical world than in the digital realm. Additionally, handling a physical model naturally stimulates discussion in group settings, allowing new theories and ideas to emerge. We therefore suggest that by supplementing our digital visualization techniques with a tangible counterpart produced by 3-D printing technology, we may unlock ideas and insights about the data previously unattainable with a digital model alone. Future studies should explore concept interpretation and comprehension in educational environments using digital visualizations with and without a supplementary physical counterpart.
In addition to these possible advantages of 3-D printing data objects, it is also important to discuss some of the limitations of this technology. First, 3-D printing creates a static object that may not accurately represent the dynamics inherent in biological data. Once the object is printed, it is fixed in time and space with one set of colors and shapes. In this sense, the visual display offered by a computer may be advantageous for many types of data and research results. It is worth noting that this disadvantage may be partially addressed by new 4-D technology that is able to print dynamic objects using thermal hydrogels. Second, 3-D printing is likely to have size constraints. For example, it is unlikely that the typical hairball characteristic of large, complex biological networks will be amenable to 3-D printing at the level of detail necessary to handle and interpret the object. Finally, it will be important to compare the usefulness of 3-D printed objects to other emerging technologies, such as holograms that could be interacted with through haptic devices. It is our hope that this review will motivate formal scientific studies to evaluate the usefulness of 3-D printing and the other technologies mentioned for augmenting biological data mining.