1. On pg. 1523 of the article, the authors say that they 'decided to take a descriptive rather than a prescriptive approach' to writing their paper. What are the values of a descriptive rather than prescriptive approach? If you have done the research, why not offer possible solutions with a prescriptive approach?
2. What exactly constitutes a visualization? Is it simply any information or data set that is presented for visual assessment? What sorts of projects would benefit from visualization vs. sonification and what would those data sets look like?
3. I would be curious to see the research on how people who have developed hyper-senses employ visualization and sonification tools. Is their aptitude for the use and learning of such programs dramatically higher than others? What is the application of such programs for people who are blind or deaf/hearing impaired?
1. The authors based their study on "seven evaluation scenarios most often encountered by visualization researchers." As someone completely new to this topic, I'd like to know a bit more about how these were determined, what other scenarios were omitted, and how they play out in reality.
2. Following on my first question, how might these scenarios be different - and thus the paper's findings be different - with a different sample set? They used EuroVis, InfoVis, IVS, and VAST (page 1523).
3. I don't really have a question about this, per se, but just in an attempt to think about this study in the framework(s) we've been learning about in class this semester, I'm interested in discussing their original coding tags (table 3) alongside the DIKW hierarchy, information life cycle, and various knowledge management flows. This is really the only way "in" to this topic for me, as it's a bit foreign thus far.
1. When talking about the scope of evaluation, the authors outline five stages in visualization development. It seems like there is only one iteration of visualization development, running from pre-design to redesign. Shouldn't there be multiple iterations from a user-centered design perspective?
2. In the "UWP: Evaluation Questions" section, the authors list "In which daily activities should the visualization tool be integrated?" as one of the evaluation questions. What is the significance of the connection to daily activities, and why should visualization be integrated into those kinds of activities? Shouldn't the visualization mainly be responsible for capturing the activities?
3. For the methods of evaluation in the different scenarios, the authors discuss techniques including field observation, interviews, and laboratory observation. Would online questionnaires, surveys, and remote testing, especially using crowdsourcing tools, also be good methods for evaluating a visualization?
1. Sheelagh Carpendale, one of the authors of this paper, also co-wrote a paper I read last semester. In that paper, she described a new device that lets a user analyze spreadsheet data on its screen through intuitive gestures such as sketching columns and coordinates. Based on the ways of classifying information visualization here, which scenario does that device belong to?
2. In 6.6.3, the authors raise four ways to evaluate user experience: "Informal evaluation", "Usability test", "Field observation", and "Laboratory questionnaire". However, it seems to me that they left something out. At an Internet company, the most common way to gauge user satisfaction with a website is to build dashboards and monitor previously defined metrics. That practice doesn't seem to belong to any of the methods above, right?
3. In the conclusion, the authors mention that "we coded these evaluations according to 17 tags (table. 3)". Yet in Table 3, two tags are not included in any scenario. Does that mean the classification of information visualization evaluation is not fully complete?
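A minimal sketch of the dashboard-and-metrics idea in question 2, with invented event names and log data (nothing here comes from the paper): a single satisfaction-style metric computed from logged interaction events.

```python
from collections import Counter

# Hypothetical interaction log, as a product dashboard might collect it.
# The event names and users are invented for illustration.
events = [
    ("u1", "task_started"), ("u1", "task_completed"),
    ("u2", "task_started"), ("u2", "task_abandoned"),
    ("u3", "task_started"), ("u3", "task_completed"),
]

def completion_rate(log):
    """Share of started tasks that were completed -- one simple
    dashboard metric that can stand in for user satisfaction."""
    counts = Counter(event for _, event in log)
    started = counts["task_started"]
    return counts["task_completed"] / started if started else 0.0

print(completion_rate(events))  # 2 of the 3 started tasks were completed
```

In practice such a metric would be recomputed continuously over live logs, which is exactly the kind of ongoing monitoring that sits outside the four one-off methods the paper lists.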
1. Regarding UWP evaluation scenarios, the authors state that, “observational studies sometimes occur in laboratory settings in order to allow for more control of the study situation” (1525). While it’s true that conducting studies in a lab setting would provide more control, does it provide more accuracy? UWP relates to environment/work usability – would a lab setting really provide the right environment to evaluate it?
2. Based on the authors’ study, it seems that the evaluation scenarios are user-heavy. In the “Trends in Evaluation” section of their study, 67% of the 85% that made up the three “main” evaluation scenarios involved users. Can we expect evaluation methods to continue to be so involved with users? Will they change as visualization develops?
3. The authors repeatedly stipulate that their study is “descriptive rather than prescriptive” (1532). Though they did well in their goal of describing the different evaluation scenarios, how effective are these scenarios? Are lab studies more effective than field studies? What can we expect evaluation scenarios to look like in the future?
In the UWP scenario, Lam et al. mention that research focused on studying people and their task processes is currently lacking. Does the almost implicit nature of this step prevent any in-depth study from occurring? This seems especially true in an individual setting, since groups would by their very nature have more explicit aspects of interaction available for visualization research.
The questions posed under the CTV section present some interesting avenues for exploration. Does the third question, “Is the tool helpful in explaining and communicating concepts to third parties?”(1527), need to be present for all visualization tools or does that particular aspect only need to be dealt with in certain instances?
In the User Experience section, the informal evaluation mentions allowing people to play with a system. How exactly is this form of evaluation different from a usability test and field observation method? It seems like the informal evaluation mimics a lot of both of these methods.
1. In this paper, I notice that most of the ways to present information are two-dimensional. I'm wondering how three-dimensional (or higher-dimensional) graphs would help information visualization.
2. In all the VA work discussed in the paper, I didn't see any HCI elements appearing with the graphs. I'm surprised, because I thought directly and intuitively manipulating diagrams to analyze data is the future of VA. Did the authors only keep their eyes on current approaches to VA?
3. Several visualization types are classified in the paper: Describing Data, Viewing Relationships, Picturing Data, and so on. I think the types are determined both by user needs and by technical constraints. In this sense, with the development of information technology, what kinds of new types will emerge in the near future?
1. The authors believe that the final decision on appropriate methods should be made on a case-by-case basis. Shouldn't we also analyze the scenarios based on the particular situation and need? And is there any chance that we may need to combine those scenarios?
2. In the introduction, the authors categorize evaluation scenarios into those for understanding data analysis processes and those that evaluate visualizations themselves. The scenarios for understanding data analysis are UWP, VDAR, CTV, and CDA; the scenarios for understanding visualizations are UP, UE, and VA. Yet on page 1524, the authors say they will describe the four visualization scenarios in Sections UWP, VDAR, CTV, and CDA, followed by the three process scenarios in Sections UP, UE, and VA. This inconsistency is confusing. Which group is the process part and which is the visualization part?
3. How did earlier researchers arrive at these seven scenarios? Are there any more scenarios we should take into consideration?
1. I am wondering whether infographics might count as a type of visualization. Infographics usually contain written facts or numbers, but use graphs or other forms of visual analytics to display the information. I would say that the displays are generally made to be more visually appealing to get a point across, perhaps at the expense of more accurately proportional displays. Infographics have become very popular recently on social media sites. How are infographics affecting the distribution of information? Have there been any studies to evaluate the effectiveness of communication through infographics?
2. The authors mention ambient displays several times in the paper. I am not quite sure what an ambient display is. They mention that ambient displays are good tools for quick, peripheral communication. What would be an example of an ambient display?
3. Lam et al, as well as the Icke, both mention “knowledge discovery” in their article. What is knowledge discovery? How is this different than data analysis or data mining? What makes knowledge discovery about knowledge, but not data or information?
1. Page 1520 mentions that “scenarios were derived from a systematic analysis of 850 papers (361 with evaluation) from the information visualization research literature.” What kind of evaluation was it, and how extensive? Also, what's the reasoning behind picking the 361 with evaluations out of the 850 to evaluate? Wouldn't that skew the results of this study?
2. The authors say that “their work is closest to a subtype of systematic review known as narrative review, which is a qualitative approach and describes existing literature using narratives without performing quantitative synthesis of study results.” How is the literature review we are writing different from a narrative review? They sound similar, so I was wondering whether there are any real differences between the two.
3. In section 6.1, “Understanding Environments and Work Practices” it says that “in information visualization research studying people and their task processes is still rarely done and only few notable exceptions have published results of these analyses.” Could the Lemieux article, (from 10-17) “Visual analytics, cognition and archival arrangement and description: studying archivists’ cognitive tasks to leverage visual thinking for a sustainable archival future,” be a possible example of this “information visualization research studying people and their task processes?"
1. Seven scenarios are explained based on an extensive literature review. Are these scenarios related to each other? If so, how? Does visualization of a particular scenario apply to any other scenario? What are the constraints?
2. The questions for the CTV scenario pertain to the quality with which information is acquired and the modalities with which people interact with the visualizations. Can we include a question such as, “How can we assess the quality of the information people learned using a tool? Has the message been conveyed as expected?”
3. Can this scenario approach be used in the field of sonification to identify research areas? Can we design scenarios according to the needs of a sonification system?
1 - In reading through these scenarios, it appears that visualization is a very quantitative science. Which industries are currently using visualization in their workflow, and to what ends? Most of my familiarity with visualization comes from graphic designers using information to create pretty models/pictures, and not necessarily for in-depth data analysis. In looking at TACC's website, it appears that GIS/mapping are making a lot of use of visualization technologies, which makes sense since mapping is a pretty visual field.
2 - Looking at the questions in Section 6, Understanding Work Practices, I'm interested in how Knowledge Management folk could use/employ visualization in their daily work lives. How does one generate visualizations that are specific and useful in the management of information resources, as opposed to interpreting data?
3 - These scenarios also seem applicable to realms outside of visualization, but could really be used in any information science context. It seems vital that we understand our work flows, how our practices help/hinder our users, and attempt to objectively measure efficacy. How can we expand this discussion to say, enhance iSchool curriculum or our own values/desires as information scientists?
1. On page 1525 the authors review evaluation questions for identifying visualization tools. In relation to this, is there any set of questions or established code used to create visualization schemes?
2. Under what systems or circumstances are visualization methods and techniques most effective?
3. In thinking about all of the readings this week, I'm really drawn to the idea of incorporating the senses in processing information. To that end, what role do our senses play in understanding organized information? Have there been any instances or mechanisms developed to incorporate more than one sense in data presentation?
1. Companies such as Development Seed are creating complex yet lightweight APIs that can easily be leveraged to build rich maps around datasets (see example here: https://www.mapbox.com/mapbox.js/example/v1.0.0/). Tools like these seem extremely beneficial for helping users engage with one another, and I am curious what other tools like this exist.
2. While visualization seems extremely beneficial for viewing things like large datasets, how could designers put measures in place to ensure that operators can account for errors in the data? When compressing so much information, there has to be a certain amount of loss in complexity.
3. Not a specific question to the reading but I am interested in any types of software which currently exist that allow users to collaborate over visualized data. I'm curious if the trend in this field is moving toward software which can be run over the web or if more powerful offline software is required.
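The information-loss concern in question 2 can be made concrete with a toy example (all numbers are made up): once raw values are compressed into a summary statistic or coarse bins for display, detail such as an outlier's exact magnitude is gone.

```python
# Made-up measurements containing one extreme outlier (250).
values = [12, 15, 14, 13, 16, 250, 14, 15]

# Compressing to a single summary statistic lets the outlier silently
# dominate: the mean no longer resembles any typical value.
mean = sum(values) / len(values)

# Coarse binning keeps a little more structure, but only the fact that
# *something* landed in the top bin survives -- not its actual value.
bins = {"0-99": 0, "100-199": 0, "200+": 0}
for v in values:
    if v < 100:
        bins["0-99"] += 1
    elif v < 200:
        bins["100-199"] += 1
    else:
        bins["200+"] += 1

print(mean, bins)
```

A designer wanting operators to catch data errors would need to surface exactly this lost detail, for example by flagging bins whose members deviate wildly from the rest.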
1. In the section where the authors discuss understanding environments and work practices, it strikes me that these practices could vary quite drastically from place to place. Is the purpose of understanding these evaluation scenarios and creating these questions to allow for a sort of universal approach to decision-making in regards to what you or your work environment wants to achieve with visualization tools?
2. In talking about evaluation of user performance, the authors mention the more commonly used metrics, time and accuracy, but also propose the use of memorability. I’d like to know more about this concept if more is known. How does memorability apply in the case of visualization?
3. I think it’s interesting that the majority of research up until this point has focused on the visualization aspect as opposed to the process. It seems to solidify this idea that we focus quite frequently on the user, rather than those making the system. I wonder if this has changed at all or if we will perpetually be more conscious of the user than of those behind-the-scenes.
1. The authors intend to encourage scholars to select specific evaluation goals before considering methods. This is a good idea when a scholar's research is based on guidance from this paper; however, evaluation goals cannot come first in every case. If I am interested in a visualization method and want to evaluate it and compare it with another method, the evaluation goal might only emerge after I dig deeper into the topic.
2. In cases where there are gaps in evaluation approaches, the authors suggest examples from other fields. I wonder whether those examples can actually apply to the field of information visualization. Some gaps might exist because of oversight, but others might exist for other reasons.
3. There are seven scenarios in this paper. If I am going to do an evaluation of information visualization but my topic spans two of these seven scenarios, what should I do? How should I choose the evaluation goals and questions? Which methods should I apply to my project? The seven scenarios might overlap each other, and I wonder whether there are solutions to this problem.
1) Is there anyone looking at how to visualize three-dimensional data? What challenges would representing three-dimensional data present?
2) Do visualizations ever involve more than one sense? If so, would there be any use to presenting information via senses beyond our aural and visual senses?
3) I understand that data visualization is connected to HCI, but what really connects data visualization to the study of information and the iSchool specifically?
1) I thought it was interesting that, per section 6.1, there haven’t been many studies in the visualization field on how to identify and solve visualization problems that arise in work environments (p. 1524). This seems like a pretty significant oversight, but it’s one that has appeared repeatedly in our readings: a relative lack of study of how information is used in day-to-day scenarios. Why might this be a repeating pattern in the field?
2) I liked the structuring of the scenarios, especially the fact that the authors provided sample evaluation questions for each. While I would have liked a sample application for each scenario as well, this reading was overall much more concrete than many of our readings, and it was easier to see how its findings could be applied to “ground work” in the field.
3) The section on evaluating visualization algorithms was interesting, because the evaluation questions essentially seemed to call for an evaluation that privileged both accuracy (point 2) and interest (point 1). It is an important balance to draw, because it’s easy to envision a scenario in which an “interesting” pattern being visualized was also misleading or incomplete. Especially given the problem of confirmation bias among researchers, how would an evaluator balance the truthfulness of a representation against the desire (and perceptual predisposition) to favor an algorithm that produces an interesting pattern?
1. Of the stages of visualization development, three are about ‘design’: pre-design, design, and redesign. It seems important to have a good design in the development, but what percentage of the development does design usually take up?
2. The authors coded these evaluations into seven scenarios, but they also admit that further coding could reveal new scenarios and questions they might not have considered here. So what might lie beyond their conclusions?
3. The authors emphasize that the scenarios for understanding visualizations are evaluating user performance and evaluating user experience, which suggests that VA is closely related to HCI. So what is the overlap between these two areas, and what are the differences?
1. Are the User Experience evaluations for interactive or static visualizations? I would assume for interactive, but that requires more evaluation questions. Is the user able to use and understand the interactions, for example applying filters to the data? Is there feedback from the system to the user, so (s)he knows what is happening at all times?
2. Continuing with the display, if one user applies a custom setting display of the data is there adequate feedback for a new user to understand how the data has been displayed?
3. It is pretty interesting that there are so many more articles on user performance, user experience, and evaluating visualization algorithms than on the process of producing effective user performance and user experience. Better processing and analysis can lead to better visualizations for the end user. It does make sense given that HCI and usability testing are very popular, but does it still place the cart before the horse in a way?
1. The APT mentioned in the Archives unit seems like it would be a good candidate for this kind of visualization research. Some of the functions of that instrument seemed to be tactile as well as visual, but could these evaluations be applied to that tool?
2. Methodical studies of the visualization of information seem to be relatively new according to this paper. We’ve used visualization for as long as we’ve had information to convey, but examining its effectiveness in this way is more recent, perhaps. Is there some crossover with this kind of study and marketing or advertising studies that examine visual impact?
3. The examination of visualization in collaborative work is interesting, as trying to convey information to a group simultaneously is often a challenge requiring organization of a specific presentation and so on. Has there been more work on real-time updating visualization groupware?
1. To what degree, if any, has graphic design played a role in designing tools and the 'rules' for visualization? That is, has design ever come into real play when it comes to designing these basic visualizations, or is it something of a tertiary concern?
2. The authors state on pg. 1532 that they kept to a descriptive and not prescriptive approach because they did not want to assign a set type or scenario since each situation being evaluated is different and "requires deep understanding of the evaluation goals and constraints." I don't entirely agree with their stance, because I think they still could have constructed some vague scenarios that possibly 'best fit' under 1 of the 7 with caveats that when doing an evaluation, you may want to consider more than one particular scenario. Researchers sometimes mix qualitative and quantitative research methods without any real issue, so I don't know why they thought they couldn't do something similar to provide a bit more information to guide readers.
3. On pg. 1522 the authors mention a systematic review that found the proportion of papers with evaluations increased over time, but the quality had remained stagnant due to a decreased number of participants, too many students used as participants, and gender imbalance. It reminded me of a lot of the problems mentioned in gauging relevance -- much of that stemmed from too many types of people not being represented, and ultimately judging a search's relevancy depended a lot on outside influences. I feel like even though most evaluation methods might have a concrete measuring stick in place, there are still too many outside variables that can influence where you end up on that stick. It's kind of like how everyone tells you to try to be the first person interviewing for a job, to make an impression free of comparisons, or the last, because they'll remember you best.
1. In the beginning of this article, the authors claim that evaluation can occur at different stages of visualization development -- pre-design, design, prototype, deployment, and redesign -- but they do not give any specific explanation of this relationship when discussing the various evaluation methods later. So I am wondering: at which stage(s) can each of these evaluations be used?
2. Many methods use interviews or observations as a way of evaluating. However, how do we judge whether the number of interviewees is too small or too large?
3. When discussing "Evaluating User Performance", the authors suggest using controlled experiments to do the evaluation. My question is: which factors should be controlled in these experiments? In other words, which aspects could influence the results of user performance experiments?
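One way to picture the controlled experiment in question 3: hold the task and dataset fixed, vary only the visualization condition, and compare the usual user-performance metrics of completion time and accuracy. The trial data and condition names below are invented for illustration.

```python
# Hypothetical trials: (condition, seconds_to_complete, answered_correctly).
# The task and dataset are held constant; only the visualization varies,
# so differences in the metrics can be attributed to the condition.
trials = [
    ("treemap", 12.0, True), ("treemap", 15.0, True), ("treemap", 18.0, False),
    ("node_link", 20.0, True), ("node_link", 25.0, False), ("node_link", 30.0, False),
]

def summarize(trials):
    """Mean completion time and accuracy per condition."""
    out = {}
    for cond in {c for c, _, _ in trials}:
        rows = [(t, ok) for c, t, ok in trials if c == cond]
        out[cond] = {
            "mean_time": sum(t for t, _ in rows) / len(rows),
            "accuracy": sum(ok for _, ok in rows) / len(rows),
        }
    return out

print(summarize(trials))
```

Anything not listed as the varied condition (task, data, display hardware, participant training) is a factor that must either be held constant or randomized, or it will confound the comparison.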
1. In this article the authors discussed several different scenarios of visualization evaluation and papers that contained tags related to these scenarios. After examining 361 papers from four different venues, the authors created a table showing which tags appeared in which venue with which frequency. What does the distribution of types of tags tell us about what these publications specialize in?
2. In this article the authors describe 7 different scenarios of visualization evaluation. These scenarios include understanding work practices, evaluating visual data analysis and reasoning, and several others. While each of these scenarios seems separate, do you think it is possible to have an evaluation method that can be applied to multiple scenarios, or should there be different methods for different scenarios?
3. In this article the authors separate the seven scenarios that they examined into two different groups: the process group and the visualization group. When examining the number of papers written with goals aimed toward these two groups, they found that many more papers were written about the visualization group than about the process group. Why do you think there are more papers written about visualization than about process? Do you agree with the authors' reasoning that it is related to the fields that visualization grew from?
1. In section 6.6.1 of this paper the authors talk about the goals of user experience evaluation. Something they did not mention as a goal, or at least as a useful byproduct, is that the feedback users give can often contain an idea the designers had not previously thought of. In my limited experience, every user test I've done has had this happen at least once and has helped improve my design.
2. Field observation of user experience can often be the most useful method and provide the most practical information about a system. By observing what users do when using the system in their real environment, designers can gain information that the users themselves often don't even think of when asked for it.
3. I appreciate that the authors did not map the scenarios and methods one-to-one. While I think there are obvious methods for particular scenarios, I also think it's very possible that using different methods can enlighten researchers to information they may not have considered.
1. ‘The researchers found that while the proportion of papers with evaluations increased over time, the quality of the evaluation may not have improved.’ Why do the authors say that? The increase in proportion means that researchers value papers with evaluations; it seems odd that the quality would not have improved as well.
2. In Table 3, we can see that there are fewer papers in the process group than in the visualization group; it seems that visualization gets more attention than process. We can also see that in Figure 1. So what are the reasons for this phenomenon?
3. In section 5.3, when talking about developing the scenarios, the authors say that they removed two tags. Is that acceptable? I think scientific papers should be stricter about this.
1. The authors fault previous studies for a lack of gender balance. Obviously gender influences many things. Have any further studies been done with more of a gender balance? In what ways would gender influence the evaluation of visualization systems?
2. This article as well as several others mention open-coding. What exactly is open-coding? Are there other types of coding? Why is it used?
3. Fig. 2 shows a vast increase in evaluations geared toward the user since 1995. Is this related to the increase in personal electronic devices and commercialization of visualization for personal users?
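To make question 2's "open coding" concrete: it is a qualitative analysis step in which tags emerge from the material during reading rather than being fixed in advance, and the consolidated tags are then tallied, much as the paper counts tag frequencies across venues in Table 3. The papers and tag names below are invented for illustration.

```python
from collections import Counter

# Invented: evaluation sections of four papers, each labeled with
# whatever tags emerged while reading them (not a predefined scheme).
coded_papers = {
    "paper_a": ["user_study", "case_study"],
    "paper_b": ["algorithm_performance"],
    "paper_c": ["user_study", "field_observation"],
    "paper_d": ["user_study"],
}

# After coding, similar tags are merged and tallied -- the kind of
# count behind a tags-per-venue frequency table.
tag_counts = Counter(tag for tags in coded_papers.values() for tag in tags)
print(tag_counts.most_common(1))  # [('user_study', 3)]
```

Other coding styles exist (e.g. closed coding against a fixed codebook); open coding is used when, as in this paper, the categories themselves are what the analysis is meant to discover.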
1. On pg. 1523 of the article, the authors say that they 'decided to take a descriptive rather than a prescriptive approach' to writing their paper. What are the values of a descriptive rather than prescriptive approach? If you have done the research, why not offer possible solutions with a prescriptive approach?
ReplyDelete2. What exactly constitutes a visualization? Is it simply any information or data set that is presented for visual assessment? What sorts of projects would benefit from visualization vs. sonification and what would those data sets look like?
3. I would be curious to see the research on how people who have developed hyper-senses employ visualization and sonification tools. Is their aptitude for the use and learning of such programs dramatically higher than others? What is the application of such programs for people who are blind or deaf/hearing impaired?
1. The authors based their study on "seven evaluation scenarios most often encountered by visualization researchers." As someone completely new to this topic, I'd like to know a bit more about how these were determined, what other omitted scenarios were, and how they play out in reality.
ReplyDelete2. Following on my first question, how might these scenarios be different - and thus the paper's findings be different - with a different sample set? They used EuroVis, InfoVis, IVS, and VAST (page 1523).
3. I don't really have a question about this, per se, but just in an attempt to think about this study in the framework(s) we've been learning about in class this semester, I'm interested in discussing their original coding tags (table 3) alongside the DIKW hierarchy, information life cycle, and various knowledge management flows. This is really the only way "in" to this topic for me, as it's a bit foreign thus far.
1. When talking about the scope of evaluation, the author outlines five stages in visualization development. It seems like there is only one iteration of visualization development, which from the predesign to redesign. Shouldn’t it be multiple iterations from a user centered design perspective?
ReplyDelete2. In the section of "UWP: Evaluation Questions”, the author lists "In which daily activities should the visualization too be integrated? “ as one of evaluation question. I wonder what the significance of the connection with daily activities is? And why should visualization be integrated into these kind of activities? Shouldn’t the visualization be mainly responsible for capturing the activities?
3. For the methods of evaluation for different scenarios, the author talks about different techniques, which include field observation, interview, laboratory observation and etc. Would online questionnaire, survey and online remote testing, especially using crowdsourcing tools, be also a good method for evaluating the visualization?
1. In regard with Sheelagh Carpendale, one of the authors in this paper, I read a paper that she wrote as a joint author on last semester. In this paper, she described a new device by which a user can analyze the data in a spreadsheet on the screen of the device by intuitive gestures such as sketching columns and coordinates. Based upon the ways of classifying information visualization, what type of the scenario the device belongs to?
ReplyDelete2. In 6.6.3, the authors raised four ways to evaluate user experience and they are "Informal evaluation", "Usability test", "Field observation" and "Laboratory questionnaire". However, it seemed to me that they left something behind. In an Internet company, the most common way to reflect the degree of user satisfaction about the website is to build dashboards and monitor the metrics defined previously. This action seemed not to belong to any methods above, right?
3. In the conclusion, the authors mentioned "we coded these evaluations according to 17 tags (table. 3)". While in table #3 in the paper, there are two tags are not included in any scenarios. Does it mean that the completeness of the classification of information visualization is not fully achieved?
3.
1. Regarding UWP evaluation scenarios, the authors state that, “observational studies sometimes occur in laboratory settings in order to allow for more control of the study situation” (1525). While it’s true that conducting studies in a lab setting would provide more control, does it provide more accuracy? UWP relates to environment/work usability – would a lab setting really provide the right environment to evaluate it?
ReplyDelete2. Based on the authors’ study, it seems that the evaluation scenarios are user-heavy. In the “Trends in Evaluation” section of their study, 67% of the 85% that made up the three “main” evaluation scenarios involved users. Can we expect evaluation methods to continue to be so involved with users? Will they change as visualization develops?
3. The authors repeatedly stipulate that their study is “descriptive rather than prescriptive” (1532). Though they did well in their goal of describing the different evaluation scenarios, how effective are these scenarios? Are lab studies more effective than field studies? What can we expect evaluation scenarios to look like in the future?
In the UWP scenario, Lam et al. mention that research focused on studying people and their task processes is currently lacking. Does the almost implicit nature of this step prevent any in-depth study from occurring? Particularly in an individual setting, since by its very nature a group setting would offer more explicit aspects of interaction for visualization research.
The questions posed under the CTV section present some interesting avenues for exploration. Does the third question, “Is the tool helpful in explaining and communicating concepts to third parties?” (1527), need to be present for all visualization tools, or does that particular aspect only need to be dealt with in certain instances?
In the User Experience section, the informal evaluation mentions allowing people to play with a system. How exactly is this form of evaluation different from a usability test and field observation method? It seems like the informal evaluation mimics a lot of both of these methods.
1. In this paper, I notice that most of the ways to present information are two-dimensional. I'm wondering how graphs with three or more dimensions would help information visualization.
2. In all the VA examples discussed in the paper, I didn't see any HCI elements appearing alongside the graphs. I'm surprised, because I thought directly and intuitively manipulating diagrams to analyze data is the future of VA. Did the authors only keep their eyes on current approaches to VA?
3. Several visualization types are classified in the paper: Describing Data, Viewing Relationships, Picturing Data, and so on. I think these types are determined by both user needs and technical constraints. In that sense, as information technology develops, what new types will emerge in the near future?
1. The authors believe that the final decision on appropriate methods should be made on a case-by-case basis. Shouldn't we analyze the scenarios based on different situations and needs as well? And is there any chance we may need to combine those scenarios?
2. In the introduction, the authors categorize evaluation scenarios into those for understanding data analysis processes and those for evaluating visualizations themselves. The scenarios for understanding data analysis are UWP, VDAR, CTV, and CDA; the scenarios for understanding visualizations are UP, UE, and VA. Yet on page 1524, the authors say they will describe the four visualization scenarios in Sections UWP, VDAR, CTV, and CDA, followed by the three process scenarios in Sections UP, UE, and VA. This inconsistency is confusing. Which group is the process part and which is the visualization part?
3. How did previous researchers arrive at these seven scenarios? Are there any more scenarios we should take into consideration?
1. I am wondering whether infographics might count as a type of visualization. Infographics usually contain written facts or numbers, paired with graphs or other forms of visual analytics to display the information. I would say the displays are generally made to be visually appealing in order to get a point across, sometimes at the expense of more accurately proportional displays. Infographics have become very popular recently on social media sites. How are infographics affecting the distribution of information? Have there been any studies evaluating the effectiveness of communication through infographics?
2. The authors mention ambient displays several times in the paper. I am not quite sure what an ambient display is. They mention that ambient displays are good tools for quick, peripheral communication. What would be an example of an ambient display?
3. Lam et al., as well as Icke, mention “knowledge discovery” in their articles. What is knowledge discovery? How is it different from data analysis or data mining? What makes knowledge discovery about knowledge, rather than data or information?
1. Page 1520 mentions that “scenarios were derived from a systematic analysis of 850 papers (361 with evaluation) from the information visualization research literature.” What kind of analysis was this, and how extensive was it? Also, what's the reasoning behind examining only the 361 papers with evaluations out of the 850? Wouldn't that skew the results of this study?
2. The authors say that “their work is closest to a subtype of systematic review known as narrative review, which is a qualitative approach and describes existing literature using narratives without performing quantitative synthesis of study results.” How is the literature review we are writing different from a narrative review? They sound quite similar, so I was wondering whether there are any differences between the two.
3. In section 6.1, “Understanding Environments and Work Practices” it says that “in information visualization research studying people and their task processes is still rarely done and only few notable exceptions have published results of these analyses.” Could the Lemieux article, (from 10-17) “Visual analytics, cognition and archival arrangement and description: studying archivists’ cognitive tasks to leverage visual thinking for a sustainable archival future,” be a possible example of this “information visualization research studying people and their task processes?"
1. Seven scenarios are explained based on an extensive literature review. Are these scenarios related to each other? If so, how? Does visualization of a particular scenario apply to any other scenario? What are the constraints?
2. The questions for the CTV scenario pertain to the quality with which information is acquired and the modalities with which people interact with the visualizations. Can we include a question such as, “How can we assess the quality of the information people learned using a tool? Has the message been conveyed as intended?”
3. Can this scenario approach be used in the field of sonification to identify research areas? Can we design scenarios according to the needs of a sonification system?
1 - In reading through these scenarios, it appears that visualization is a very quantitative science. Which industries are currently using visualization in their workflow, and to what ends? Most of my familiarity with visualization comes from graphic designers using information to create pretty models/pictures, and not necessarily for in-depth data analysis. In looking at TACC's website, it appears that GIS/mapping are making a lot of use of visualization technologies, which makes sense since mapping is a pretty visual field.
2 - Looking at the questions in Section 6, Understanding Work Practices, I'm interested in how Knowledge Management folk could use/employ visualization in their daily work lives. How does one generate visualizations that are specific and useful in the management of information resources, as opposed to interpreting data?
3 - These scenarios also seem applicable to realms outside of visualization, but could really be used in any information science context. It seems vital that we understand our work flows, how our practices help/hinder our users, and attempt to objectively measure efficacy. How can we expand this discussion to say, enhance iSchool curriculum or our own values/desires as information scientists?
1. On page 1525 the authors review evaluation questions for identifying visualization tools. In relation to this, are there any set of questions or established code used to create visualization schemes?
2. Under what systems or circumstances are visualization methods and techniques most effective?
3. In thinking about all of the readings this week, I'm really drawn to the idea of incorporating the senses in processing information. To that end, what role do our senses play in understanding organized information? Have there been any instances or mechanisms developed to incorporate more than one sense in data presentation?
1. Companies such as Development Seed are creating lightweight APIs that can easily be leveraged to create complex maps around datasets (see example here: https://www.mapbox.com/mapbox.js/example/v1.0.0/). Tools like these seem extremely beneficial for helping users engage with one another, and I am curious what other tools like this exist.
2. While visualization seems extremely beneficial for viewing things like large datasets, how could designers put measures in place to ensure that operators can account for errors in the data? When compressing so much information, there has to be a certain amount of loss of complexity.
3. Not a question specific to the reading, but I am interested in what software currently exists that allows users to collaborate over visualized data. I'm curious whether the trend in this field is moving toward software that runs over the web, or whether more powerful offline software is still required.
1. In the section where the authors discuss understanding environments and work practices, it strikes me that these practices could vary quite drastically from place to place. Is the purpose of understanding these evaluation scenarios and creating these questions to allow for a sort of universal approach to decision-making in regards to what you or your work environment wants to achieve with visualization tools?
2. In talking about evaluation of user performance, the authors mention the more commonly used metrics, time and accuracy, but also propose the use of memorability. I’d like to know more about this concept if more is known. How does memorability apply in the case of visualization?
3. I think it’s interesting that the majority of research up until this point has focused on the visualization aspect as opposed to the process. It seems to solidify this idea that we focus quite frequently on the user, rather than those making the system. I wonder if this has changed at all or if we will perpetually be more conscious of the user than of those behind-the-scenes.
1. The authors encourage scholars to select specific evaluation goals before considering methods. This is a good idea when scholars conduct research based on guidance from this paper; however, evaluation goals cannot come first in every case. If I am interested in a visualization method and want to evaluate it and compare it with another method, the evaluation goal might only emerge after I dig deeper into the topic.
ReplyDelete2. In cases where there are gaps in evaluation approaches, the authors suggest examples from other fields. I wonder can these examples apply to the field of information visualization or not. The existing of some gaps might be caused by ignorance, but the existing of others might have other reasons.
3. There are seven scenarios in this paper. If I am going to do an evaluation of information visualization but my topic spans two of these seven scenarios, what should I do? How can I choose the evaluation goals and evaluation questions? Which methods should I apply to my project? These seven scenarios might overlap, and I wonder whether there are ways to deal with this problem.
1) Is there anyone looking at how to visualize three-dimensional data? What challenges would representing three-dimensional data present?
2) Do visualizations ever involve more than one sense? If so, would there be any use to presenting information via senses beyond our aural and visual senses?
3) I understand that data visualization is connected to HCI, but what really connects data visualization to the study of information and the iSchool specifically?
1) I thought it was interesting that, per section 6.1, there haven’t been many studies in the visualization field on how to identify and solve visualization problems that arise in work environments (p. 1524). This seems like a pretty significant oversight, but it’s one that has appeared repeatedly in our readings: a relative lack of study of how information is used in day-to-day scenarios. Why might this be a repeating pattern in the field?
2) I liked the structuring of the scenarios, especially the fact that the authors provided sample evaluation questions for each. While I would have liked a sample application for each scenario as well, this reading was overall much more concrete than many of our readings, and it was easier to see how its findings could be applied to “ground work” in the field.
3) The section on evaluating visualization algorithms was interesting, because the evaluation questions essentially seemed to call for an evaluation that privileged both accuracy (point 2) and interest (point 1). It is an important balance to draw, because it’s easy to envision a scenario in which an “interesting” pattern being visualized was also misleading or incomplete. Especially given the problem of confirmation bias among researchers, how would an evaluator balance the truthfulness of a representation against the desire (and perceptual predisposition) to favor an algorithm that produces an interesting pattern?
1. Of the stages of visualization development, three concern ‘design’: pre-design, design, and redesign, so good design seems important to the development process. But what percentage of the development effort does design usually take?
2. The authors coded these evaluations into seven scenarios, but they also admit that further coding could reveal new scenarios and questions they might not have considered here. So what might lie beyond their conclusions?
3. The authors emphasize that the scenarios for understanding visualizations include evaluating user performance and evaluating user experience, which suggests that visualization evaluation is closely related to HCI. So what are the overlaps between these two areas, and what are the differences?
1. Are the User Experience evaluations for interactive or static visualizations? I would assume for interactive, but that requires more evaluation questions. Is the user able to use and understand the interactions, for example applying filters to the data? Is there feedback from the system to the user, so (s)he knows what is happening at all times?
2. Continuing with the display: if one user applies a custom display setting to the data, is there adequate feedback for a new user to understand how the data has been displayed?
3. It is pretty interesting that there are so many more articles on user performance, user experience, and evaluating visualization algorithms, given that better processing and analysis can lead to better visualizations for the end user. It makes sense, with HCI and usability testing being very popular, but does it still place the cart before the horse in a way?
1. The APT mentioned in the Archives unit seems like it would be a good candidate for this kind of visualization research. Some of the functions of that instrument seemed to be tactile as well as visual, but could these evaluations be applied to that tool?
2. Methodical studies of the visualization of information seem to be relatively new according to this paper. We’ve used visualization for as long as we’ve had information to convey, but examining its effectiveness in this way is more recent, perhaps. Is there some crossover between this kind of study and marketing or advertising studies that examine visual impact?
3. The examination of visualization in collaborative work is interesting, as trying to convey information to a group simultaneously is often a challenge requiring organization of a specific presentation and so on. Has there been more work on real-time updating visualization groupware?
1. What role, if any, has graphic design played in designing the tools and the 'rules' of visualization? That is, has design ever come into real play in creating these basic visualizations, or is it something of a tertiary concern?
2. The authors state on pg. 1532 that they kept to a descriptive and not prescriptive approach because they did not want to assign a set type or scenario since each situation being evaluated is different and "requires deep understanding of the evaluation goals and constraints." I don't entirely agree with their stance, because I think they still could have constructed some vague scenarios that possibly 'best fit' under 1 of the 7 with caveats that when doing an evaluation, you may want to consider more than one particular scenario. Researchers sometimes mix qualitative and quantitative research methods without any real issue, so I don't know why they thought they couldn't do something similar to provide a bit more information to guide readers.
3. On pg. 1522 the authors mention a systematic review that found the proportion of papers with evaluations increased over time, but the quality had remained stagnant due to a decreased number of participants, an over-reliance on students as participants, and gender imbalance. It reminded me of a lot of the problems mentioned in gauging relevance: too many types of people were not represented, and judging a search's relevance ultimately depended a lot on outside influences. I feel like even though most evaluation methods might have a concrete measuring stick in place, there are still too many outside variables that can influence where you end up on that stick. It's kind of like how everyone tells you to try to be either the first person interviewing for a job, to make an impression free of comparisons, or the last, because they'll remember you best.
1. In the beginning of this article, the authors claim that evaluation can occur at different stages of visualization development, such as pre-design, design, prototype, deployment, and redesign, but they do not give any specific explanation of this relationship when discussing the various evaluation methods later. So I am wondering: at which stage(s) could each of these evaluations be used?
2. Many methods use interviews or observations as a way of evaluating. However, how do you judge whether the number of interviewees is too few or too many?
3. When discussing "Evaluating User Performance", the authors suggest using controlled experiments for the evaluation. My question is: which factors should be controlled in these experiments? In other words, which aspects could influence the results of user performance experiments?
1. In this article the authors discussed several different scenarios of visualization evaluation and papers that contained tags related to these scenarios. After examining 361 papers from four different venues the authors created a table showing which tags appeared in which venue with which frequency. What does the distribution of types of tags tell us about what these publications specialize in?
2. In this article the authors describe seven different scenarios of visualization evaluation, including understanding work practices, evaluating visual data analysis and reasoning, and several others. While each of these scenarios seems separate, do you think it is possible to have an evaluation method that can be applied to multiple scenarios, or should there be different methods for different scenarios?
3. In this article the authors separate the seven different scenarios that they examined into two different groups. These groups were the process group and the visualization group. When examining the number of papers written with goals aimed towards these two groups they found that many more papers were written about the visualization group than were written about the process group. Why do you think that there are more papers written about visualization rather than process? Do you agree with the reasoning that the authors gave that it is related to the fields that visualization grew from?
1. In section 6.6.1 of this paper the authors talk about the goals of user experience evaluation. Something they did not mention as a goal, or at least as a useful byproduct, is that the feedback users give often contains an idea the designers had not previously thought of. In my limited experience, every user test I've done has had this happen at least once, and it has helped improve my design.
2. Field observation of user experience can often be the most useful method, providing the most practical information about a system. By observing what users do when using the system in their own setting, designers can gain information that users themselves often don't think to mention when asked for it directly.
3. I appreciate that the authors did not map the scenarios and methods one-to-one. While I think there are obvious methods for particular scenarios, I also think that using different methods allows for the possibility of surfacing information researchers may not have considered.
1. ‘The researchers found that while the proportion of papers with evaluations increased over time, the quality of the evaluation may not have improved.’ Why do the authors say this? The increase in proportion suggests that researchers value papers with evaluations, so it seems odd that the quality would not have improved as well.
2. In Table 3, we can see that there are fewer papers in the process group than in the visualization group; Figure 1 shows the same pattern. It seems that visualization is treated as more important than process. What are the reasons for this phenomenon?
3. In section 5.3, when discussing developing the scenarios, the authors say that they removed two tags. Is that OK? I think scientific papers should be stricter about this.
1. The authors fault previous studies for lack of gender balance. Obviously gender influences many things. Have any further studies been done with more gender balance? In what ways would gender influence the evaluation of visualization systems?
2. This article, as well as several others, mentions open-coding. What exactly is open-coding? Are there other types of coding? Why is it used?
3. Fig. 2 shows a vast increase in evaluations geared toward the user since 1995. Is this related to the increase in personal electronic devices and commercialization of visualization for personal users?