1. On pg. 6, under section 3.1.2, the authors discuss 'the role of learning in auditory display efficacy'. They seem to be of the opinion that sonification is easy to learn, requiring only minimal training, but how can they truly claim this when auditory and critical listening skills need to be developed over time? Audio engineers and sound technicians would already have the general skill set to quickly understand sonification, but for the average researcher or user, isn't there an inherent need for a strong auditory foundation?
2. Is sonification truly only useful for large data sets, or is it a technique that could be scaled down, too? Is the concept of scalability that scaling up is the issue while scaling down is trivial? Or would bugs and issues arise from such a situation? Does sonification really warrant becoming its own discipline, or is it most effective when used with data visualization and other techniques?
3. In an undergraduate anthropology class, we once discussed how women tend to be better listeners and multitaskers, due to certain evolutionary designs. I wonder if there is any evidence that women pick up on sonification quicker than men and are able to more easily understand sonification while simultaneously engaged in other activities?
1. What are the accessibility issues in sonification? In the visual presentation of information there are problems such as color blindness. Is there a similar issue in sonification, where some groups of people can only perceive a certain range of sound frequencies?
2. What are the cultural differences in the perception of sonification? And what are the main factors behind those differences? Do they correlate significantly with the differences between the languages of different cultures?
3. Haptic study is another branch of HCI which aims to convey information through the haptic senses. It is very similar to sonification in many ways, just taking advantage of a different sense. So what collaborations exist between these two fields, and what lessons can sonification learn from haptic study?
1. I wonder how this technology could be used to better understand animal "speech," such as whale songs, dolphin communications, etc.
2. For that matter, despite sonification's focus on "non-speech sounds," how might this research be applied to better understanding and use of human speech, such as automatic transcription - e.g. the issues we encountered and discussed previously when considering how best to utilize oral history audio recordings with limited human voice recognition technology? According to the authors, sonification is intended to be a broadly interdisciplinary field, so why not include human speech issues?
3. The possibilities for visually impaired users to apprehend data in new ways, as briefly touched on on page 11, are really fascinating. I'd like to learn more about this. For instance, "Meijer (1992) has developed means for scanning arbitrary visual images and presenting them in sound." This sounds fascinating - what is this?! And it also makes me ask: what approaches might we take - and technologies might we develop - that allow various groups of users (including the conventionally abled) to consider data in new ways, to "visualize" data through different senses?
1. The authors claim that “some believe that we are approaching the limits of users’ ability to interpret/comprehend visual information” (4). Who believes this and where is the evidence? Have multiple studies been conducted about this? What might these limits look like and is there a way around them?
2. The authors discuss the lack of funding surrounding sonification research and state that “university administrators and funding agencies often fail to appreciate the necessity of interdisciplinary funding” (15). They then compare the current problems sonification is experiencing to problems faced by early visualization studies. However, if sonification is so effective, why is all the money in visualization? Is it just a matter of which is more established? Is it just a matter of sonification being interdisciplinary?
3. The authors cite numerous studies that prove how effective sonification is, but do these studies really prove that sonification is more effective than visualization? What about the deaf – they are more than able to compensate using visual and tactile methods. Is it possible that sonification is better? Is this just another area of research sonification would explore if it had the money?
1 The author mentions that audio's natural integrative properties are increasingly being proven suitable for presenting high-dimensional data without creating information overload for users. Why? How does the author conclude this? What are audio's natural integrative properties?
2 The author mentions Wickens's research (7), which has demonstrated that sound can enhance a visual or haptic display by providing an additional channel of information. How exactly does that kind of channel of information affect us? Apart from how it feels, does it differ from visual information?
3 What exactly can sonification learn from visualization? Since the emitter and the medium are different, and even the components of the information are different, how could we apply lessons from visualization to sonification in a practical way?
As they champion the use of sonification, Kramer et al. state quite plainly that we may in fact be “approaching the limits of user’s abilities to interpret and comprehend visual information”(4). With the advent of technology allowing for better graphic processing and even 3D environments, the potential for visualization seems to be moving along nicely. What exactly are the limits spoken about?
The article mentions how sound can augment visualizations and even haptic feedback by providing another avenue for information presentation. If this is the case would the best method for studying sonification be to focus on using it in conjunction with visuals and tactile feedback? Or would focusing solely on sonification be the better approach? I almost feel as though a “sensory” field would be beneficial for sonification along with visualization and haptic methods of data presentation.
According to Kramer et al., the public and researchers are “visually biased”(18). Is this really the case or is it just a matter of ease for us to process information by examining it visually? In a way this brings up the “a picture is worth a thousand words” situation where the vast majority of people experience the world through their eyes despite the sometimes narrow focus relying on vision forces upon us.
1. The authors mention that "By now it is clear that sonification works and can be very useful. What is not so clear is how to go about designing a successful sonification." I think the same could be said of everything around us. Why, how much, and in what contexts does sonification work? The authors didn't say much about these questions.
2. I recently ran into an issue concerning sonification and I'd like to raise it here. In another class, my teammates and I went to National Instruments, interviewed some of its employees, and recorded the conversation with an iPhone app. After returning to campus, I hoped to transcribe the conversation into text. However, no software I tried was capable of doing so well enough. Does this issue fall within the sonification research field? If so, how might it be addressed?
3. I'm especially interested in the recognition of speech but the authors of the paper didn't discuss much. What is the technical basis of voice recognition? Machine learning?
1. The report mentions that combining visualization and sonification does not necessarily provide a good way to analyze data. Are there any types of data that would work well to analyze by utilizing both techniques?
2. I am wondering about the benefits of using sonification to hear the changes in a dataset over seeing the changes through a graph showing the frequencies of sound. Would displaying frequencies on a graph be the same as listening to the sound? Does one method work better for different kinds of data?
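One way to make the listen-versus-graph comparison concrete: the simplest kind of sonification is parameter mapping, where each data value is assigned a pitch. The sketch below is my own illustration, not from the paper (the function name, data, and the 220-880 Hz range are all made up); the point is that exactly the same mapping could be rendered as sound or drawn on a graph, and the open question is which rendering reveals changes faster.

```python
# Minimal parameter-mapping sonification sketch (illustrative only):
# each data value is linearly mapped to a pitch in a comfortable range.
# The identical mapping could instead be plotted on a graph.

def value_to_freq(value, vmin, vmax, fmin=220.0, fmax=880.0):
    """Linearly map a data value onto a frequency range in Hz."""
    if vmax == vmin:                    # avoid division by zero on flat data
        return (fmin + fmax) / 2
    t = (value - vmin) / (vmax - vmin)  # normalize to [0, 1]
    return fmin + t * (fmax - fmin)

data = [3.0, 5.0, 4.0, 9.0, 1.0]
lo, hi = min(data), max(data)
freqs = [value_to_freq(v, lo, hi) for v in data]
# the largest value (9.0) maps to the top of the pitch range, 880.0 Hz
```

Feeding `freqs` to a tone generator gives the auditory display; plotting them gives the visual one, so the two displays differ only in the sense they address.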
3. Humans all have a different spectrum of sound frequencies that they can hear, and this can change as they get older. How does sonification account for the differences in humans’ hearing? Will some people be better at understanding certain methods of sonification over others?
1. Are certain file formats better for storing and enhancing the quality of sonification data? Is there a current standard for sonification sounds? Is a standard possible seeing as there are so many formats of audio?
2. There is the idea that if a person loses a sense (e.g., hearing or sight), the body can enhance its other senses to make up for that loss. Has any sonification testing been done on, say, sighted and blind participants to see whether there is greater information retention or loss between those two groups? Does one group have a better ear for audio, or is there no difference?
3. Page 15, “Until we understand more about what makes sonification successful, the field will remain mired in ad hoc trial and error design.” At the end of all this, what real benefit does sonification have for research and data retention? What would be the end goal if this field received proper funding? Would sonification be most useful in fields like speech pathology or speech therapy?
1. Sonification is the process of converting large data sets into auditory data. Can this be converted into visual data? Is visual data more appealing than auditory data in terms of graphics and charts?
2. The author points out that “Although we cannot perceive a lengthy sonification "at a glance," a multimodal approach can tap the positive features of each of the component sensory domains. Research has demonstrated that sound can enhance a visual or haptic display by providing an additional channel of information” . In this context, is sonification done to assist understanding of visual displays?
3. Sonification also involves issues of representation, task dependency, and user-interface interaction. What kind of interface is necessary for sonification? Do audio displays require displays on the user end?
1. In consideration of the evolution of sonification design (pg. 14), I can't help but wonder about the potential use within various cultural communities. Could sonification technology be modified such that it could account for or reflect differences in tonal perceptions cross-culturally?
2. The need to unify the field of sonification research is clearly articulated; given that, I am curious how large the sonification research community is. And furthermore, how has the body of dedicated sonification research evolved over time?
3. The authors present the issues of portability, flexibility, and integrability as key for the development of sonification research (pg. 11). Do these same issues hold true when looking to expand the use of sonification outside of the research community, and specifically, into the public realm?
1. The article brought up some interesting questions about the use of sonification in presenting large datasets but I felt that they failed to give a good view of how an audible system would function in the real world. Do the authors believe that this technique would be effective because it would be easier to spot small changes? I'm a little bit skeptical that this would be effective in real world use.
2. The authors discuss the possibility for multiple systems to alert the operator without the need to be visually present. While in principle this would be excellent I don't get the sense that it would be the most effective. How do operators deal with manipulating audible datasets or sharing them with others?
3. While not discussed in the article, I’m interested in how each particular person’s hearing would affect the types of data points that they could comprehend. As people age they often lose the ability to hear certain frequencies, and this seems like a logistical nightmare in terms of being able to parse out important information, especially if it is minute.
1 - I'm having a lot of difficulty wrapping my head around non-verbal sound as a way to communicate information and ultimately be cited in a paper or used to embody research. Perhaps I'm just hung up on the value of language, but in order to increase the validity of sonification within different fields, don't we first need to change the way the user interacts with research so they can appropriately use/understand sonification to enhance their own work? Who has to change first - the users or the field? Given that (in most cases) sounds within a certain range tend to indicate an "emergency" (e.g. fire alarms, those emergency text alert systems), how can we increase user interest in sonification when they've almost been conditioned to have a 'fight or flight' or 'damn that's obnoxious' response to sound?
2 - How can we use sonification with an awareness of auditory variance among individuals? While most individuals can hear in a range of roughly 20 Hz to 20 kHz, how do we account for individuals with hearing loss, or those with incredibly sensitive hearing who are prone to sensory overload? In a similar vein, given the neuroplasticity of small children (under 9 or 10) and their ability to distinguish tonal differences more easily than adults, how could sonification be used in children's classrooms or education as a tool for teaching?
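One conceivable way a display could account for auditory variance (my own sketch, not something the paper proposes) is to rescale its output pitches into whatever band a particular listener hears well. The function name and the 300-4000 Hz example profile below are invented for illustration.

```python
# Hedged sketch: adapt a sonification's pitch range to one listener.
# The 300-4000 Hz band is a made-up example hearing profile, standing in
# for whatever range a real audiogram would report for that person.

def fit_to_listener(freqs, hear_lo=300.0, hear_hi=4000.0):
    """Rescale a list of frequencies (Hz) into the listener's audible band."""
    lo, hi = min(freqs), max(freqs)
    if hi == lo:                          # flat input: use the band's middle
        return [(hear_lo + hear_hi) / 2 for _ in freqs]
    return [hear_lo + (f - lo) / (hi - lo) * (hear_hi - hear_lo)
            for f in freqs]

freqs = [100.0, 8000.0, 12000.0]          # some tones fall outside the band
adapted = fit_to_listener(freqs)
# every adapted frequency now lies within [300, 4000] Hz
```

The relative ordering of the data is preserved; only the absolute pitch range changes, which is the kind of per-user customization the question is asking about.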
3 - To accommodate users with visual impairments who are unable to use visualization, is there a way to 'translate' visual data into an auditory display? How would a sonification tool be 'customized' to meet the needs of a visually impaired person or a person with years of cilia damage from listening to their earbuds at full volume? Would the translation of various sonifications to different pitch levels (much like the third area of research identified in the paper) convey the same type of information each time, or would something get lost when the tonal range changes?
1. I was pleased and relieved when the authors finally gave real-world examples of sonification and its application given that the concept as explained up until then was hard for me to grasp, but just out of curiosity, does anyone have any other, possibly less specialized, examples of this? They give the impression that it could be used for a wide array of things, and within a wide array of disciplines, so I’d be interested in finding out other, maybe less obscure or more everyday, applications of sonification.
2. We have read these three papers now on visualization and sonification, and this paper in particular discusses the interdisciplinary aspect of sonification studies. Are these both things which the Information Science field is beginning to tackle? What are concrete examples of their application in this field, if any? Lastly, considering the interdisciplinary nature of sonification and its background in so many different academic areas, does information science seem to be a ripe place for its study?
3. I’m especially interested in sonification as it relates to the visually impaired and would be interested to hear more on this topic.
1. I agree that sonification is likely useful for comprehending or monitoring complex temporal data, but the reason the author gave seems unreliable. The author said: “fast-changing or transient data that might be blurred or completely missed by visual displays may be easily detectable in even a primitive, but well-designed auditory display.” However, since the speed of light is much faster than the speed of sound, I don't think it is a good comparison.
2. How to evaluate the quality of sonification? Sonification is a new field of study. If scholars want to provide a curriculum for teaching sonification, the evaluation of sonification’s quality could be a significant issue. Are there any indicators to test the validity of the data sonification?
3. The authors mention: "highly salient musical patterns can be easily recognized and recalled even when subjected to radical transformations so that this property can be used to enhance visual-based data mining and pattern search tasks". I wonder how sonification could enhance visual-based data mining and how the two could be matched to each other. How can sonification analyze data and provide information, or even knowledge?
1) This article made a compelling argument for the centralization of sonification as an interdisciplinary field, but it could have provided more concrete examples of potential applications of data sonification. The existing applications for sonification seem to fall into two categories: sonification of data that was already auditory in some way (e.g. seismology), or the use of sound to get the user’s attention when the data suddenly changes in real-time (e.g. the Geiger counter and the oxygen level monitor). I was not left with a very good understanding of how sonification could be applied outside of the existing applications. What are some other examples?
2) I realized while reading this article that I’ve already encountered data sonification outside the examples given: namely, the “music” created from the data of the Higgs-Boson particle. http://www.livescience.com/21521-higgs-boson-music.html This was undoubtedly interesting, but it was not particularly clear to a non-physicist how the data was symbolized as music, or what exactly one could learn by “listening” to the Higgs-Boson. Is this confusion an example of one of the problems with sonification still being a very young field, or is it simply a case of the data being sonified making more sense to physicists?
3) The article argues that sonification systems have had to be created ad-hoc in response to a need rather than stemming from centralized sonification theory. While I do see the value of a foundational theory, it seems like it would be very difficult to create that theory without having a solid and nuanced understanding of needs of specific data and implementations of the technology. How would the article’s authors suggest balancing the needs of the field (for recognition, theoretical rigor, etc.) with the needs of the information being processed?
1. The authors note that hearing is well designed to discriminate between periodic and aperiodic events and can detect small changes in the frequency of continuous signals, which points to a distinct advantage of auditory over visual displays. However, in everyday experience, listening often leads to misunderstanding because we sometimes mishear words. How can this be explained?
2. This article tries to demonstrate the efficiency of sonification. Yet a graphic that occupies little space on a hard disk can convey a lot of information, which seems much more efficient than sonification. Is the evidence sufficient?
3. It seems this area still lacks standards. As we know, sound quality varies according to the bit rate of the sample, and a higher bit rate usually means a much larger file size. So what quality is acceptable for our use?
1. In our advanced usability class there was an option to work on a project involving analyzing bird sounds. I wonder if this technology could be used with analyzing animal sounds/communication?
2. I wonder if "sound experts" (those with higher sense of sound recognition) would be able to use this tool to a greater extent than those with normal audio recognition?
3. Is it reasonable to think that complex data will produce complex audio? Would we be able to notice the differences if they are subtle? Would this limit what type of data could be used?
1. The data we derive from sound is often subtle and more varied than we realize, as we process signals subconsciously. The authors give two overt examples (the Geiger counter and the Quantum Whistle) of sonification, but would smaller signals, such as shifts in tone as one moves through data, be helpful?
2. We are constantly listening for alerts and messages, with ringtones and email notifications and many other sounds telling us things. All of this can get pretty noisy. Would sonification add to this ambient level in a positive way in conveying its information?
3. This doesn’t seem to be discussed in the paper, but what are the difficulties of employing sonification in a group setting? One can wear headphones to prevent sound from transmitting to officemates but then other communication is impaired.
1. The author spends a lot of time touting people's hearing and auditory skills, as well as the cost efficiency of sonification tools (both hardware and software) and the freedom it gives us to be looking or doing something else. While I don't disagree that some people retain material better through hearing it, I feel there's a lot being done to gloss over either folks with hearing problems, or some level of attention deficit. For instance, I've learned audiobooks are wasted on me, because I tune out what's being said after a period of time, and before you know it my work is done, but also 5 chapters have passed that I don't even remember hearing.
2. In terms of digital storage space, wouldn't one distinct disadvantage of sonification be the tremendous file sizes needed for the data to be considered good audible quality? Yes, you could make a secondary copy at a lower bitrate to actually listen to, but wouldn't you still need the higher-bitrate file to make it from?
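The storage worry can be made concrete with back-of-envelope PCM arithmetic (my own illustration, not a figure from the paper): uncompressed audio size is just sample rate × bytes per sample × channels × duration.

```python
# Back-of-envelope uncompressed (PCM) audio storage cost:
# bytes = sample_rate * (bit_depth / 8) * channels * seconds

def pcm_bytes(sample_rate, bit_depth, channels, seconds):
    """Size in bytes of uncompressed PCM audio with these settings."""
    return sample_rate * (bit_depth // 8) * channels * seconds

# One minute at CD quality (44.1 kHz, 16-bit, stereo):
hq = pcm_bytes(44100, 16, 2, 60)   # 10,584,000 bytes, roughly 10 MB
# The same minute at a low-fidelity setting (8 kHz, 8-bit, mono):
lq = pcm_bytes(8000, 8, 1, 60)     # 480,000 bytes, roughly 0.5 MB
```

So a high-quality master really is over 20 times larger than a listening copy here, which supports the commenter's point that the master would still have to be kept around.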
3. The section on Sonification and Design and the intersections between visual and audio reminded me a lot of Tanya Clement's HiPSTAS Project: http://blogs.ischool.utexas.edu/hipstas/
1. Visualization is widely used in practice as it provides a direct visual impression of data through charts and tables, and can be used across various fields and industries. By contrast, what are the potential advantages of sonification techniques when put into practice, such as in economic or social activities?
2. In the introduction part, page 4, the author defined “sonification” as the use of nonspeech audio to convey information. What does nonspeech audio refer to? For example, is “voice navigation” a sonification tool?
3. On pages 13 and 14, the article provided some successful examples of using sonification to present data. However, the author did not mention how to record and store the information generated by sonification techniques.
1. In this article the authors propose the increased use of sonification as a method of representing data. They argue that this is because we are “approaching the limits of users’ abilities to interpret and comprehend visual information.” Do you agree that we are reaching the limits of visualization, or do you think that new methods of visualization could simplify data further, making it easier for a user to understand?
2. In this article the authors state that some methods of sonification require user training to achieve efficiency. They use the training for sonar operators as an example. Does this need for training make it more difficult to use sonified data? Can you think of examples of visualized data that require training to use?
3. The authors of this article discuss three major issues that the field of sonification needs to address: the need to establish sonification as a discipline, the need for a medium of communicating sonification research across the sonification community, and the need for an established curriculum for sonification education. Which of these three issues do you think is most pressing to the sonification community?
1. Something that has always intrigued me about sonification is the ability for people to learn to react to audio stimuli without thinking, just like any other human sense. It's also interesting how underutilized sonification is. There are tons of potential uses for sonification in situations where the visual or touch senses are otherwise occupied. When it is used well, it is a very unobtrusive method for conveying information.
2. "...the need to comprehend an abundance of data..." I agree wholeheartedly that this statement makes sonification very relevant. In our current time, access to information has never been easier, and anyone who wishes to take in that much data will have to use more than just one or two of their senses.
3. Of the papers that we have read for this week, this is my favorite. This is an area of research that I find particularly interesting, and I agree with the authors' conclusion that sonification should be researched more. Perhaps my motivation for wanting sonification researched more is selfish, but I can't think of any way in my own life in which it is used to convey information much more complex than telling me my door is ajar.
1. There are three global issues: recognition of sonification, communication, and curriculum. For the recognition of sonification, I wonder whether the research subjects include animals. For the curriculum, do any universities teach sonification?
2. ‘Sonification is the transformation of data relations into perceived relations in an acoustic signal for the purposes of facilitating communication or interpretation.’ I have a question: if sonification uses nonspeech audio to convey information, how can it change data relations into perceived relations?
3. ‘We still do not really know how to design a sonification that we know in advance will work well for a specific task.’ Why? What is the difficulty?
1. In the first sample research question, the authors discuss natural versus synthesized sounds, claiming that natural sound lacks discernible parameters. Is there not some type of audio equipment that could eventually determine the parameters of natural sound? Also, what parameters are the authors referring to?
2. Sonification seems to be a viable tool for detecting patterns in data, but the notion of sonification for the purpose of data mining is a much more difficult concept to grasp. How would this work? Please explain.
3. The authors feel that visualization research is waning and that sonification now deserves the attention and funding. This field seems much more limited than visualization for many reasons, though. Just comparing the two human senses - hearing and sight - sight seems to have much more impact than hearing. Hearing also seems to be lost more quickly than sight: sight may become blurred but is not usually lost almost entirely, as hearing is. Finally, the cost of audio equipment appears to be a severely limiting factor as well.
1. On pg. 6, under section 3.1.2, the authors discuss 'the role of learning in auditory display efficacy'. They seem to be of the opinion that sonification is easy to learn requiring only minimum training, but how can they truly claim this when auditory and critical listening skills are something that needs to be developed over time? Audio engineers and sound technicians would already have the general skill sets to quickly understand sonification, but for the average researcher or user, isn't there an inherent need for a strong, auditory foundation?
ReplyDelete2. Is sonification truly only useful for large data sets, or is it a technique that could be scaled down, too? Is the concept of scalability that scaling up is the issue, but scaling down is trivial? Or would bugs and issues arise from such a situation? Does sonification really warrant becoming its own discipline, or is the most effective when used with data visualization and other techniques?
3. In an undergraduate anthropology class, we once discussed how women tend to be better listeners and multitaskers, due to certain evolutionary designs. I wonder if there is any evidence if women pick up on sonification quicker than men and are able to more easily understand sonification while simultaneously engaged in other activities?
1. What are the accessibility issues in sonification? In visual presentation of information, there are problems, such as color blindness. Is there any similar issue in sonification that some groups of people can only perceive a certain range of sound frequency?
ReplyDelete2. What are the cultural differences in the perception of sonification? And what are the main factors in the differences? Do these differences have a significant correlation with the differences of languages of different cultures?
3. Haptic study is another branch in HCI which aims to convey information by haptic senses. It is very similar to sonification in many ways, just taking advantages of different senses. So what are the collaborations between these two fields, and what lessons can sonification learn from haptic study?
1. I wonder how this technology could be used to better understand animal "speech," such as whale songs, dolphin communications, etc.
ReplyDelete2. For that matter, despite sonification's focus on "non-speech sounds," how might this research be applied to better understanding and use of human speech, such as automatic transcription - e.g. the issues we encountered and discussed previously when considering how best to utilize oral history audio recordings with limited human voice recognition technology? According to the authors, sonification is intended to be a broadly interdisciplinary field, so why not include human speech issues?
3. The possibilities for visually impaired users to apprehend data in new ways, as briefly touched on on page 11, is really fascinating. I'd like to learn more about this. For instance, "Meijer (1992) has developed means for scanning arbitrary visual images and presenting them in sound." This sounds fascinating - what is this?! And it also makes me ask, what ways might we approach - and technologies might we develop - that allow various groups of users (including the conventionally abled) to consider data in new ways, to "visualize" data through different senses?
1. The authors claim that “some believe that we are approaching the limits of users’ ability to interpret/comprehend visual information” (4). Who believes this and where is the evidence? Have multiple studies been conducted about this? What might these limits look like and is there a way around them?
ReplyDelete2. The authors discuss the lack of funding surrounding sonification research and state that “university administrators and funding agencies often fail to appreciate the necessity of interdisciplinary funding” (15). They then compare the current problems sonification is experiencing to problems faced by early visualization studies. However, if sonification is so effective, why is all the money in visualization? Is it just a matter of which is more established? Is it just a matter of sonification being interdisciplinary?
3. The authors cite numerous studies that prove how effective sonification is, but do these studies really prove that sonification is more effective than visualization? What about the deaf – they are more than able to compensate using visual and tactile methods. Is it possible that sonification is better? Is this just another area of research sonification would explore if it had the money?
1 The author mentions that audio's natural integrative properties are increasingly being proven suitable for presenting high-dimensional data without creating information overload for users. Why? How does the author conclude this? What are audio's natural integrative properties?
ReplyDelete2 The author mentions Wickens' s research (7) that has demonstrated that sound can enhance a visual or haptic display by providing an additional channel of information. How exactly that kind of channel of information effects us? Except for feelings, does that have difference from visual information?
3 What exactly sonification can learn from visualization? Since both emitter and media are different, even the components of information are different, how could we apply visualization into sonification in practical way?
As they champion the use of sonification, Kramer et al. state quite plainly that we may in fact be “approaching the limits of user’s abilities to interpret and comprehend visual information”(4). With the advent of technology allowing for better graphic processing and even 3D environments, the potential for visualization seems to be moving along nicely. What exactly are the limits spoken about?
The article mentions how sound can augment visualizations and even haptic feedback by providing another avenue for information presentation. If this is the case would the best method for studying sonification be to focus on using it in conjunction with visuals and tactile feedback? Or would focusing solely on sonification be the better approach? I almost feel as though a “sensory” field would be beneficial for sonification along with visualization and haptic methods of data presentation.
According to Kramer et al., the public and researchers are “visually biased”(18). Is this really the case or is it just a matter of ease for us to process information by examining it visually? In a way this brings up the “a picture is worth a thousand words” situation where the vast majority of people experience the world through their eyes despite the sometimes narrow focus relying on vision forces upon us.
1. The authors mentioned that "By now it is clear that sonification works and can be very useful. What is not so clear is how to go about designing a successful sonification." I think the same could be said of many things around us. Why, how much, and in what contexts does sonification work? The authors didn't say much about these questions.
2. I recently ran into an issue concerning sonification and I'd like to share it here. In another class, my teammates and I went to National Instruments, interviewed some of the employees, and recorded the conversation with an iPhone app. Back on campus, I hoped to transcribe the conversation into text. However, no software was capable of doing so well enough. Does this issue fall within the sonification research field? If so, how would it be addressed?
3. I'm especially interested in the recognition of speech, but the authors of the paper didn't discuss it much. What is the technical basis of voice recognition? Machine learning?
1. The report mentions that combining visualization and sonification does not necessarily provide a good way to analyze data. Are there any types of data that would work well when analyzed using both techniques?
2. I am wondering about the benefits of using sonification to hear the change in a dataset over seeing the changes through a graph showing the frequencies of sound. Would displaying frequencies on a graph be the same as listening to the sound? Does one method work better for different kinds of data?
3. Humans all have a different spectrum of sound frequencies that they can hear, and this can change as they get older. How does sonification account for the differences in humans’ hearing? Will some people be better at understanding certain methods of sonification over others?
1. Are certain file formats better for storing and enhancing the quality of sonification data? Is there a current standard for sonification sounds? Is a standard possible seeing as there are so many formats of audio?
2. There is the idea that if a person loses a sense (e.g., hearing or sight), the body can enhance its other senses to make up for that loss. Has any sonification testing been done on, say, sighted and blind participants to see whether there is greater information retention or loss between those two groups? Does one group have a better ear for audio, or is there no difference?
3. Page 15, “Until we understand more about what makes sonification successful, the field will remain mired in ad hoc trial and error design.” At the end of all this, what real benefit does sonification have for research and data retention? What would be the end goal if this field received proper funding? Would sonification be most useful in fields like speech pathology or speech therapy?
1. Sonification is the process of converting big data into auditory data. Can this be converted into visual data? Is visual data more appealing than auditory data in terms of graphics and charts?
2. The author points out that “Although we cannot perceive a lengthy sonification "at a glance," a multimodal approach can tap the positive features of each of the component sensory domains. Research has demonstrated that sound can enhance a visual or haptic display by providing an additional channel of information.” In this context, is sonification done to assist understanding of visual displays?
3. Sonification also involves issues of representation, task dependency, and user-interface interaction. What kind of interface is necessary for sonification? Do audio displays require displays on the user end?
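To make the conversion in question 1 a bit more concrete: the most common technique is "parameter mapping," where each data value is rescaled onto an audible pitch, so the shape of the series becomes a melodic contour. A minimal sketch in Python (the two-octave 220-880 Hz band and the sample readings are my own assumptions, not something the paper prescribes):

```python
def map_to_pitches(data, lo_hz=220.0, hi_hz=880.0):
    """Linearly rescale data values into [lo_hz, hi_hz] so the shape
    of the series becomes a melody-like contour of pitches in Hz."""
    d_min, d_max = min(data), max(data)
    span = (d_max - d_min) or 1.0          # constant data -> avoid /0
    return [lo_hz + (v - d_min) / span * (hi_hz - lo_hz) for v in data]

readings = [0.0, 1.0, 2.0, 4.0]            # hypothetical sensor values
print(map_to_pitches(readings))            # -> [220.0, 385.0, 550.0, 880.0]
```

Running the same rescaling in the other direction (pitches back onto y-coordinates) is essentially visualization, which is one way to see why the two techniques are so often discussed together.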
1. In consideration of the evolution of sonification design (pg. 14), I can't help but wonder about the potential use within various cultural communities. Could sonification technology be modified such that it could account for or reflect differences in tonal perceptions cross-culturally?
2. The need to unify the field of sonification research is clearly articulated. Still, I am curious: how large is the sonification research community? And furthermore, how has the body of dedicated sonification research evolved over time?
3. The authors present the issues of portability, flexibility, and integrability as key for the development of sonification research (pg. 11). Do these same issues hold true when looking to expand the use of sonification outside of the research community, and specifically, into the public realm?
1. The article brought up some interesting questions about the use of sonification in presenting large datasets but I felt that they failed to give a good view of how an audible system would function in the real world. Do the authors believe that this technique would be effective because it would be easier to spot small changes? I'm a little bit skeptical that this would be effective in real world use.
2. The authors discuss the possibility for multiple systems to alert the operator without the need to be visually present. While in principle this would be excellent, I don't get the sense that it would be the most effective approach. How do operators deal with manipulating audible datasets or sharing them with others?
3. While not discussed in the article, I’m interested in how each particular person’s hearing would affect the types of data points that they could comprehend. As people age they often lose the ability to hear certain frequencies, and this seems like a logistical nightmare in terms of being able to parse out important information, especially if it is minute.
1 - I'm having a lot of difficulty wrapping my head around non-verbal sound as a way to communicate information and ultimately be cited in a paper or used to embody research. Perhaps I'm just hung up on the value of language, but in order to increase the validity of sonification within different fields, don't we first need to change the way the user interacts with research so they can appropriately use/understand sonification to enhance their own work? Who has to change first - the users or the field? Given that (in most cases) sounds within a certain range tend to indicate an "emergency" (e.g. fire alarms, those emergency text alert systems), how can we increase user interest in sonification when they've almost been conditioned to have a 'fight or flight' or 'damn that's obnoxious' response to sound?
2 - How can we use sonification with an awareness of auditory variance among individuals? While most individuals can hear in a range of roughly 20 Hz to 20 kHz, how do we account for individuals with hearing loss, or with incredibly sensitive hearing who are prone to sensory overload? In a similar vein, given the neuroplasticity of small children (under 9 or 10) and their ability to distinguish tonal differences more easily than adults, how could sonification be used in children's classrooms or education as a tool for teaching?
3 - To accommodate users with visual impairments who are unable to use visualization, is there a way to 'translate' visual data into an auditory display? How would a sonification tool be 'customized' to meet the needs of a visually impaired person or a person with years of cilia damage from listening to their earbuds at full volume? Would the translation of various sonifications to different pitch levels (much like the third area of research identified in the paper) convey the same type of information each time, or would something get lost when the tonal range changes?
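On the 'customization' question: one plausible approach is to remap a display's pitches into whatever band a particular listener can still hear comfortably, working in log-frequency so that relative musical intervals keep their proportions. A sketch under assumed numbers (the 2 kHz ceiling for the hypothetical listener is purely illustrative):

```python
import math

def fit_to_band(freqs, lo_hz, hi_hz):
    """Rescale pitches into [lo_hz, hi_hz] in log-frequency, preserving
    their ordering and the proportions of the intervals between them."""
    logs = [math.log2(f) for f in freqs]
    l_min, l_max = min(logs), max(logs)
    span = (l_max - l_min) or 1.0
    b_lo, b_hi = math.log2(lo_hz), math.log2(hi_hz)
    return [2 ** (b_lo + (l - l_min) / span * (b_hi - b_lo)) for l in logs]

# Original display spans 200 Hz - 8 kHz; squeeze it under an assumed
# 2 kHz ceiling for a listener with high-frequency hearing loss.
display = [200.0, 800.0, 3200.0, 8000.0]
print([round(f) for f in fit_to_band(display, 200.0, 2000.0)])
```

Whether the compressed intervals still convey the same type of information is exactly the open question raised above: ordering survives the remapping, but absolute pitch cues do not.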
1. I was pleased and relieved when the authors finally gave real-world examples of sonification and its application given that the concept as explained up until then was hard for me to grasp, but just out of curiosity, does anyone have any other, possibly less specialized, examples of this? They give the impression that it could be used for a wide array of things, and within a wide array of disciplines, so I’d be interested in finding out other, maybe less obscure or more everyday, applications of sonification.
2. We have read these three papers now on visualization and sonification, and this paper in particular discusses the interdisciplinary aspect of sonification studies. Are these both things which the Information Science field is beginning to tackle? What are concrete examples of their application in this field, if any? Lastly, considering the interdisciplinary nature of sonification and its background in so many different academic areas, does information science seem to be a ripe place for its study?
3. I’m especially interested in sonification as it relates to the visually impaired and would be interested to hear more on this topic.
1. I agree that sonification is likely useful for comprehending or monitoring complex temporal data, but the reason the author gave seems unreliable. The author said: “fast-changing or transient data that might be blurred or completely missed by visual displays may be easily detectable in even a primitive, but well-designed auditory display.” However, since the speed of light is much faster than the speed of sound, I don't think this is a good comparison.
2. How should we evaluate the quality of a sonification? Sonification is a new field of study, and if scholars want to provide a curriculum for teaching it, evaluating the quality of a sonification will be a significant issue. Are there any indicators to test the validity of a data sonification?
3. The authors mention: "highly salient musical patterns can be easily recognized and recalled even when subjected to radical transformations so that this property can be used to enhance visual-based data mining and pattern search tasks". I wonder how sonification could enhance visual-based data mining and how the two could be matched to each other. How can sonification analyze data and provide information, or even knowledge?
1) This article made a compelling argument for the centralization of sonification as an interdisciplinary field, but it could have provided more concrete examples of potential applications of data sonification. The existing applications for sonification seem to fall into two categories: sonification of data that was already auditory in some way (e.g. seismology), or the use of sound to get the user’s attention when the data suddenly changes in real-time (e.g. the Geiger counter and the oxygen level monitor). I was not left with a very good understanding of how sonification could be applied outside of the existing applications. What are some other examples?
2) I realized while reading this article that I’ve already encountered data sonification outside the examples given: namely, the “music” created from the data of the Higgs-Boson particle. http://www.livescience.com/21521-higgs-boson-music.html This was undoubtedly interesting, but it was not particularly clear to a non-physicist how the data was symbolized as music, or what exactly one could learn by “listening” to the Higgs-Boson. Is this confusion an example of one of the problems with sonification still being a very young field, or is it simply a case of the data being sonified making more sense to physicists?
3) The article argues that sonification systems have had to be created ad-hoc in response to a need rather than stemming from centralized sonification theory. While I do see the value of a foundational theory, it seems like it would be very difficult to create that theory without having a solid and nuanced understanding of needs of specific data and implementations of the technology. How would the article’s authors suggest balancing the needs of the field (for recognition, theoretical rigor, etc.) with the needs of the information being processed?
1. The authors assume that hearing is well designed to discriminate between periodic and aperiodic events and can detect small changes in the frequency of continuous signals, which points to a distinct advantage of auditory over visual displays. However, in everyday experience, listening often leads to misunderstanding because we sometimes fail to catch the right word. How can this be explained?
2. This article tries to demonstrate the efficiency of sonification. Yet it is easy to see that a graphic occupying only a little space on a hard disk can convey a lot of information, which seems much more efficient than sonification. Is the evidence sufficient?
3. It seems that this area still lacks standards. As we know, sound quality varies with the bit rate of the sample, and a higher bit rate usually means a much larger file size. So what quality is acceptable for our use?
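The storage trade-off behind this question is easy to quantify for uncompressed PCM audio: size is just sample rate times bit depth times channels times duration. A quick sketch (the CD-quality figures are standard values, but their relevance to sonification storage is my own assumption):

```python
def pcm_size_bytes(sample_rate, bit_depth, channels, seconds):
    """Uncompressed PCM size: bits per second divided by 8, times duration."""
    return sample_rate * bit_depth * channels * seconds // 8

# One hour of CD-quality audio: 44.1 kHz, 16-bit, stereo.
one_hour = pcm_size_bytes(44_100, 16, 2, 3600)
print(one_hour)          # 635040000 bytes, i.e. roughly 635 MB
```

Lossy formats like MP3 cut this by an order of magnitude, but at the cost of discarding exactly the kind of fine spectral detail a sonification might be trying to convey, which is why the "what quality is acceptable" question is not trivial.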
1. In our advanced usability class there was an option to work on a project involving analyzing bird sounds. I wonder whether this technology could be used to analyze animal sounds and communication.
2. I wonder if "sound experts" (those with a higher sense of sound recognition) would be able to use this tool to a greater extent than those with normal audio recognition?
3. Is it reasonable to think that complex data will produce complex audio? Would we be able to notice the differences if they are subtle? Would this limit what types of data could be used?
1. The data we derive from sound is often subtle and more varied than we realize, as we process signals subconsciously. The authors give two overt examples (the Geiger counter and the Quantum Whistle) of sonification, but would smaller signals, such as shifts in tone as one moves through data, be helpful?
2. We are constantly listening for alerts and messages, with ringtones and email notifications and many other sounds telling us things. All of this can get pretty noisy. Would sonification add to this ambient level in a positive way in conveying its information?
3. This doesn’t seem to be discussed in the paper, but what are the difficulties of employing sonification in a group setting? One can wear headphones to prevent sound from transmitting to officemates but then other communication is impaired.
1. The author spends a lot of time touting people's hearing and auditory skills, as well as the cost efficiency of sonification tools (both hardware and software) and the freedom it gives us to be looking or doing something else. While I don't disagree that some people retain material better through hearing it, I feel there's a lot being done to gloss over either folks with hearing problems, or some level of attention deficit. For instance, I've learned audiobooks are wasted on me, because I tune out what's being said after a period of time, and before you know it my work is done, but also 5 chapters have passed that I don't even remember hearing.
2. In terms of digital storage space, wouldn't one distinct disadvantage of sonification be the tremendous file sizes needed for the data to be considered good audible quality? Yes, you could make a secondary copy at a lower bitrate to actually listen to, but wouldn't you still need the higher-bitrate file to make it from?
3. The section on Sonification and Design and the intersections between visual and audio reminded me a lot of Tanya Clement's HiPSTAS Project: http://blogs.ischool.utexas.edu/hipstas/
1. Visualization is widely used in practice, as it provides a direct visual impression of data through charts and tables, and it can be applied across various fields and industries. By contrast, what are the potential advantages of sonification techniques when put into practice, for example in economic or social activities?
2. In the introduction, on page 4, the author defines “sonification” as the use of nonspeech audio to convey information. What does nonspeech audio refer to? For example, is “voice navigation” a sonification tool?
3. On pages 13 and 14, the article provides some successful examples of using sonification to present data. However, the author does not mention how to record and store the information generated by sonification techniques.
1. In this article the authors propose the increased use of sonification as a method of representing data. They argue that this is because we are “approaching the limits of users’ abilities to interpret and comprehend visual information.” Do you agree that we are reaching the limits of visualization, or do you think that new methods of visualization could simplify data further, making it easier for a user to understand?
2. In this article the authors state that there are some methods of sonification that require user training to achieve efficiency. They use the training of sonar operators as an example. Does this need for training make it more difficult to use sonified data? Can you think of examples of visualized data that require training to use?
3. The authors of this article discuss three major issues that the field of sonification needs to address. These issues are the need for the establishment of sonification as a discipline, the need for a medium of communicating sonification research across the sonification community, and the need for an established curriculum for sonification education. Which of these three issues do you think is most pressing to the sonification community?
1. Something that has always intrigued me about sonification is people's ability to learn to react to auditory stimuli without thinking, just as with any other human sense. It's also interesting how underutilized sonification is. There are tons of potential uses for sonification in situations where the visual or tactile senses are otherwise occupied. When it is used well, it is a very unobtrusive method for conveying information.
2. "...the need to comprehend an abundance of data..." I agree wholeheartedly that this statement makes sonification very relevant. In our current time, one's potential to access information has never been easier, and if someone wishes to access that much data, they will have to use more than just one or two of their senses.
3. Of the papers that we have read for this week, this is my favorite. This is an area of research that I find particularly interesting, and I agree with the authors' conclusion that sonification should be researched more. Perhaps my motivation for wanting sonification researched more is selfish, but in my life I can't think of any way in which it is used to convey information much more complex than telling me my door is ajar.
1. There are three global issues: recognition of sonification, communication, and curriculum. Regarding the recognition of sonification, I wonder whether the research subjects include animals. Regarding the curriculum, do any universities teach sonification?
2. ‘Sonification is the transformation of data relations into perceived relations in an acoustic signal for the purposes of facilitating communication or interpretation.’ I have a question: if sonification uses nonspeech audio to convey information, how can it change data relations into perceived relations?
3. ‘We still do not really know how to design a sonification that we know in advance will work well for a specific task.’ Why? What is the difficulty?
1. In the first sample research question, the authors discuss natural versus synthesized sounds, claiming that natural sound lacks discernible parameters. Is there not some type of audio equipment that could eventually determine the parameters of natural sound? Also, what parameters are the authors referring to?
2. Sonification seems to be a viable tool for detecting patterns in data, but the notion of sonification for the purpose of data mining is a much more difficult concept to grasp. How would this work? Please explain.
3. The authors feel that visualization research is waning and that sonification now deserves the attention and funding. This field seems much more limited than visualization for many reasons, though. Just comparing the two human senses - hearing and sight - sight seems to have much more impact than hearing. Hearing also tends to be lost more quickly than sight: sight may become blurred, but it is not usually lost almost entirely, as hearing is. Finally, the cost of audio equipment appears to be a severely limiting factor as well.
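On question 1 above: signal processing can in fact recover some parameters from natural sound. A classic example is estimating a sound's fundamental frequency by autocorrelation. The sketch below synthesizes a 220 Hz sine wave as a stand-in for a recorded natural sound (the sample rate and search band are assumed values chosen for illustration):

```python
import math

RATE = 8000          # samples per second (assumed)
FREQ = 220.0         # the "unknown" pitch we will try to recover

# Stand-in for a recorded natural sound: a quarter second of sine wave.
samples = [math.sin(2 * math.pi * FREQ * n / RATE) for n in range(2000)]

def estimate_pitch(signal, rate, lo_hz=50, hi_hz=1000):
    """Return the frequency whose period (lag) maximizes autocorrelation."""
    best_lag, best_score = None, float("-inf")
    for lag in range(rate // hi_hz, rate // lo_hz + 1):
        score = sum(signal[n] * signal[n + lag]
                    for n in range(len(signal) - lag))
        if score > best_score:
            best_lag, best_score = lag, score
    return rate / best_lag

print(round(estimate_pitch(samples, RATE)))   # close to 220 Hz
```

In this sense, "parameters" are simply the features a listener hears (pitch, loudness, roughness) that algorithms can also measure; which of them are reliably discernible in messy natural recordings is the authors' open question.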