10-17 Lemieux, Victoria L. Visual analytics, cognition and archival arrangement and description: studying archivists’ cognitive tasks to leverage visual thinking for a sustainable archival future.
1 - Is there a way to quantitatively assess arrangement and description? I'm primarily thinking of a way to understand or evaluate how 'useful' or 'accessible' an arrangement schema is - while the article focused on arrangement behavior, we have no real understanding of what the resulting arrangement was. If we can't come up with a 'good' or 'correct' way to arrange materials, how can we develop tools to help us arrange documents in the first place?
2 - I'm curious about all of the variables present in attempting to assess arrangement behavior. Lemieux acknowledges the limitations of her study of only two individuals, but I would imagine there's an incredible scope of variables to control for, making studies of arrangement challenging. The type of material, archivists' experience, training and general understanding of their identity/role as archivists, subject specialties amongst professionals...have there been comprehensive studies done to assess arrangement behavior? And why, after years of archiving, have we just begun to discuss arrangement in these terms?
1. The author mentions that "There was not a strong focus on any specific template or archival style throughout the arrangement. Both subjects followed their own method without significant difficulties, and were able to integrate their work with one another." But if each archivist processes collections differently with his/her own style, won’t this affect the neutrality of the processing? While acknowledging that humans are naturally biased, shouldn’t we establish standards to alleviate that bias?
2. The author’s research follows the principles of human-centered design, and the design process is very sound. But across the whole study there were only two subjects. The author acknowledges this limitation, but I think the limitation is more than just the small number of subjects; it is also the particular demographics of these two subjects. Both of them are recent graduates, which may also invalidate the findings.
3. The author says “While the observations were done only on physical archives as opposed to digital ones, relevant information can still be drawn out and applied to the design of a VA tool for archivists to use in working with digital archives.” I doubt the validity of this claim. Different media require different sets of interactions, because users have different mental models when processing digital collections versus physical collections. Though the author can borrow the metaphor of processing physical collections, the design should still account for the different mindset of interacting with digital collections before the evaluation.
1. At one point, Lemieux points out that the two archivists used in the experiment have somewhat different approaches to archival processing due to their own personal experiences. Many definitions of knowledge used in describing the DIK hierarchy use experience as a defining factor of knowledge. Having also cited the DIK hierarchy in her work, would it stand to reason that the different approaches are a result of different levels of knowledge?
2. In the experiment with the two archivists, they both began to recognize certain grant documents as they assessed the collection. This visual recognition led to shorter periods of time spent determining what was in the collection, as they could now almost immediately ascertain what was a grant and what was something else. As the author described two different 'loops' that archivists are cognitively fed through, would this recognition of grant documents fall into the foraging loop or the sense-making loop? Is there overlap between the two? Does one feed into the other?
3. The visual analytics model is based on identifying how archivists think. An understanding of where archivists come from, how they think, etc. is integral to creating a useful visualization process. But considering that every person is different and has various experiences, is this the right approach to creating an effective model? Or is it that the majority of people will end up favoring this tool, while others will just disregard it, because they don't fit into the generalized category?
1 There's a study cited by Pirolli and Card saying that experts do not simply review the data and match them to patterns directly from memory; instead, they select the relevant information and encode it in special representations. Without a defined rule regulating how to arrange and describe, would archivists with different backgrounds and education create a mess in the archival field?
2 It is concluded in the paper that visualization and VA hold much promise as tools to support a sustainable future for archives, particularly as a cognitive aid to archivists in undertaking complex analytic tasks, like arrangement and description, which cannot yet fully be achieved using computational methods. It is also stated in the article that VA is visualization plus computational methods. But what is the difference between them? How do these computational methods really work compared with visualization alone?
3 I also have a question about related literature. The author provides a large amount of related literature in this paper. Thinking about our project, how should we better organize our related literature? The author seems to arrange these references in a way that ultimately guides us to her topic, but along the way some of the content does not seem that related to the topic. How do we decide what to include in our related literature? And would it be better if the author put more detail about the relationship between the related literature and the topic, to give readers a clearer sense of how the paper developed?
1. On page 3 Lemieux states "have found that establishing, maintaining, and monitoring forums for user engagement are themselves very demanding tasks, especially if quality control is to be achieved" in regards to using volunteers to help with collection item descriptions. But does the archivist him- or herself need to be the person managing the forum? Could that be done by another employee with occasional consulting from the archivist? This would free up the archivist to continue with needed work. It would cost less than two archivists' salaries while simultaneously doubling the work performed.
2. Are there best practices for businesses to set up metadata at the creation and through the revisions (life) of a document? Would everything that needs to be recorded be logged in MS Office products? If not, archivists should work with the major software companies whose products a majority of users utilize to create documents (Microsoft, Adobe, etc.) to get these features and this information recorded into the history of the document. Perhaps automating the process as much as possible and requiring the user to enter information?
3. "This will provide us with data about the spatial regions that should be given greater weight in the design of our clustering algorithm" p. This seems like it would have to be very particular to the type of document. There are many spatial arrangements for identifying the different document types, so I'm sure that many different types of documents would overlap. The archivist would thus have to eliminate certain types of documents that the software would match against prior to using it. For example, the grant application might be similar to another form type but is not applicable to this collection.
4. It seems that one skill set of archivists is visual thinking: being able to recognize a document visually by what it looks like, where it is located, etc. Can we also learn something from studying visual thinkers that may help archivists, and further explore the tools that they need?
4 - I like your point about visual thinking. Something that's addressed in AEI is the idea that we can recognize certain types of documents without needing to read the entire page (for example, letters have a very specific format) and where we should look for 'clues' about authorship, such as a secretary's initials in the corner that might allow us to connect the pages to a specific creator. It would be interesting to see where we could take these skills both in and out of the technology sphere. I think the proposed study of eye-tracking that Lemieux brings up would be an excellent source of information for how archivists are 'reading' collections.
1. On page 2, Evans suggests "that instead of constructing detailed descriptions themselves, archivists should rely on web-based user contributions, establishing systems that use ‘‘the eyeballs and the intellect of… volunteers…throughout the world." This is a terrible idea for so many reasons. If these volunteers have no archival experience, wouldn't they be a hindrance rather than a help? Some serious standards would have to be implemented if volunteers were to contribute. Volunteers' help would probably create extra work and lost time, because it would need to be checked by whoever is in charge of the department. I guess it varies by institution, but are volunteers just creating extra work for institutions? Then again, in many cases archives and museums are happy to get any help they can.
2. The whole process of writing down the two archivists' thoughts during their work was interesting, but I can't help wondering if this data is skewed. What information or speech were the researchers leaving out that they thought was unnecessary? I know I babble/ramble to myself when working on things, and I guess it's part of the research/thought process, but I don't think it would be valuable/coherent enough to use to come to research conclusions. It also says "sessions were conducted in the participants’ normal archival location, so as to achieve a more naturalistic recording of the methods involved in archival processing." I'm guessing the speech or academic/professional language might vary if the two archivists were interviewed in their own homes.
3. On page 17, it says "the observations were done only on physical archives as opposed to digital ones, relevant information can still be drawn out and applied to the design of a VA tool for archivists to use in working with digital archives." How so? Physical and digital archives are totally different things. Shouldn't there be a study done on the differences between research approaches to digital and physical archives?
1. In the methodology section, two archivists participated in the study. The research team then observed their archival processing and summarized it into a three-stage sense-making process. However, I think observations from only two participants are not reliable enough. Maybe these two archivists graduated from the same school and are therefore too similar, while other archivists process archives differently.
2. Since the observation was done only on physical archives, why was the author so sure that the VA tool they designed could be used in working with digital archives? The file type is the most obvious attribute when designing VA tools; however, file types on a computer are totally different from physical file types. In short, I think they have a very weak and unreliable foundation on which to build the VA tool.
3. The author mentioned that the resulting VA tool would be used as a test bed once the prototype was developed. It is reasonable to yield additional insight and test how well the solution works in the built environment; however, I wonder how to evaluate it and make sure the prototype is good enough. They wanted to use these insights to refine the model; I wonder how the next step would work, maybe with a large group of study participants.
1. With so much emphasis being placed on the expectation that metadata will be applied at the beginning of the digital lifecycle for VA systems like this to work, I find it very surprising that we have chosen PDF to distribute so many documents in the 21st century. In order for us to start building better metadata, it seems there should be some sort of intervention between file format creators and the archives world. Should we be building better finding aids into our documents at the start?
2. There seems to be an interesting connection between the digital processing table in the last article and VA systems, in that they both attempt to aid in optimal arrangement scenarios for the archivist. It seems that context plays a huge role in how an archive is arranged, and for this reason I am wondering if visual systems might be useful in helping to organize archives specifically for the interests of the viewer, as opposed to trying to figure out a one-size-fits-all arrangement scenario.
3. It seems that recent research into visual mapping could be useful in helping to create an algorithm that sorts through which areas of documents are most useful to archivists. It would be interesting to actually see a model of the areas that a trained archivist looked at when assessing documents, as opposed to performing research on a small group of new archivists.
1. Short visual cues can certainly be of use in locating objects among an organized or semi-organized grouping. Doctor’s offices have used files with colored tabs for the first letters of a patient’s last name for decades. Are such visual shorthand methods what the authors of this paper are proposing for archival groupings?
2. Lemieux notes that visualization thus far has been directed at end users rather than assisting archivists in their initial work. This seems like a useful tool to aid both, though the tactile assistance gained from a physical object is lost with digitized versions. How might digital visual tagging be utilized to try and recreate that immediate identification?
3. Given the large number of born-digital and already-digitized archives that exist and continue to form, how can we move backwards to add these visual identifiers to pre-existing archives? The author suggests creation of a complex algorithm based upon usage studies of several archivist pairs but such a system would have to automatically go through sometimes literally millions of documents without error and would be expensive to develop. How much can this concept of visual identification truly be automated?
1. Currently, in the era of big data, the traditional methods of the archival field cannot keep up. But why does the author believe VA or visualization is a good approach to solving this problem? I haven’t studied anything about VA, but I’m interested in it. I doubt whether VA will fundamentally solve the problem.
2. ‘The application of visualization and VA in the archival domain primarily has relied upon using archival descriptive metadata’ (p10). So why does the VA in the archival domain only focus on descriptive metadata? Is there anything else in the archival domain that can apply VA and visualization? What are they and what’s the difficulty when doing that?
3. When talking about the limitations, the author says that ‘the study used physical archives, and it is still unknown to what extent the processing of entirely digital collections changes the analytic process’ (p21). I think digital archives are the mainstream currently. So I wonder why the author doesn’t use digital archives in the research. Is it difficult?
1) Because I don’t have much background in archives jargon, I was not clear on what Lemieux meant by “fonds,” and so that section was a bit confusing for me. Can we discuss what a “fond” is, and what the author means by “fonds versus record group and fonds versus series-based arrangement and description” (page 5)?
2) I find it interesting that archivists’ decision-making process has largely gone unexamined in the literature. Especially after reading the Cook article, Lemieux’s observation seems to imply that the archival field has only recently ceased to take their decision-making processes for granted. This is very striking to me, as humanities fields have been (somewhat exhaustively) examining the question of how decision-making systems regarding history are installed and reinforced for a very long time now. Given the close ties between archival studies and disciplines like history, why have archivists only recently begun to conduct empirical examinations of their decision-making processes?
3) It’s fascinating how much difficulty empirical researchers seem to have nailing down the procedures and methods used by archivists, or by information professionals in general—in other words, their tacit knowledge! It seems that it’s difficult to systematically analyze tacit human reasoning, and that it is thus very difficult to synthesize or standardize it. How can this obstacle be overcome?
The simplest way I can think to explain this is fonds are what we call collections or record groups. While in the US and Canada we use collection to indicate the entire aggregation of documents, fonds is what gets used in France, the UK, and other European countries. Here, Wiki can help with this: http://en.wikipedia.org/wiki/Fonds
The big sort of point of contention is that in Europe they have 'respect des fonds' which in a nutshell means they don't do a lot of work rearranging files in folders and changing the original order materials came in to the archives under, whereas depending on the archives here in the U.S. our point of view can be distilled as 'I do what I want' a lot. If you like I can send you a copy of some of my slides and notes from Archival Enterprise I because Ciaran knows a lot about this (and tries ever so hard to make us understand it).
1. This article introduced a new term which so far we have not discussed. What exactly are "archival fonds" and how are they used? Do they fall under description or organization?
2. The author describes some very exciting new ideas for technological advances, for example computer algorithms for spatial layouts to help identify document types. However, the archival field has been greatly impacted by economic setbacks. Where do the proponents of these new technologies expect to find funding for these technological advances?
3. Meehan described two steps in the analytic process along with two stages -- top down and bottom up, but in Lemieux's study those steps and stages were not used by the subjects. Were those steps and stages just suggested practice by Meehan or had they been put into practice sometime previously?
1. We have read so many articles that emphasize how quickly the information we handle today is exploding, including this one. We can't deny it, but have the authors considered how much of it can be processed or filtered by computers without human intervention? Is the situation really as serious as the numbers suggest?
2. Visual analytics can display information graphically or in a more direct way, which is a reasonably efficient approach for archival arrangement and description. But what could we do to maintain the accuracy of the research?
3. It is interesting that the researchers at UT Austin developed a prototype VA system to aid archivists in identifying files requiring digital preservation, and as a possible VA tool. Is it the APT system mentioned in the other reading? If not, could the APT system be regarded as a VA tool?
On page 5, Lemieux mentions that “authors do not make clear how archivists actually reason about their concrete application of the concepts when arranging and describing different fonds.” Does this lack of explanation stem from an assumption as to the overall process undertaken by archivists as they go about processing archives?
Given how human beings interact with the world, it makes sense that interacting with archival documents in the form of refoldering/rehousing helps archivists gather quite a bit of information about each document. Do the authors have a time measurement for this sort of visual and tactile interaction with the documents? And, if so, what is the optimal visual time spent on a document to gather enough information for creating a mental model?
The mention of GUIs and TUIs brings to mind the use of a tool such as the APT (Augmented Processing Table). Would incorporating a system such as the Oculus Rift, perhaps in the initial processing phase or beyond, together with a device like the APT provide a tandem that would speed up the archival process? By placing the documents/media etc. in a three-dimensional world and allowing an archivist to essentially “walk” through all the documents that need to be archived, it could give archivists another level of visualization.
1. The author talks quite a bit about the form of a document and its ability to aid archivists in inferring contextual information. She goes on to suggest, or put forth, the idea of grouping by form, giving the example of grant applications. I may be misunderstanding exactly what she means by form, but don’t archivists typically want to steer clear of arrangement based on document form? Maybe someone could help clarify how form aids in contextualization.
2. Although the author recognizes limitations of her approach to the research and addresses them within the paper, it seemed, even so, that one team of archivists processing one collection could not possibly inform how visual analytics should be incorporated into archival processes, especially in translating from a physical to a digital environment. Regardless of the context of the research, do you think the findings assist in making well-informed and thoughtful deductions about next steps and future needs?
3. Towards the end of the article, the idea of a system which replicates materiality is put forth. The author suggests that materiality is a cognitive aid, but is unsure of the form in which this might be achieved as a feature of a VA tool. First off, I’m not entirely sure what is meant by materiality, as this term has such a different meaning within a digital environment. How can this concept of materiality be described or made explicit?
re: 3. I felt curious about the concept of materiality as well. For instance, how do you convey weight, smell, texture, etc? Do you even need to record those characteristics?
1. "Another approach was suggested by Evans, that instead of constructing detailed descriptions themselves, archivists should rely on web-based user contributions, establishing systems that use 'the eyeballs and the intellect of… volunteers… throughout the world'" I really like this idea but I find it hard to believe that a project like this wouldn't come under some kind of attack by the Internet. Perhaps this is just me being a pessimist and I by no means think that this wouldn't be productive at all but I'm not sure I would trust somewhat anonymous volunteers to provide archival levels of integrity. Surely there would be some kind of approval process for information submitted this way but I also think that when faced with tons of information to verify, the human part of the process has a reasonable chance of failing.
2. This article talks a lot about the process of archiving. The author mentions that "...the archivist gathers contextual information and...the(n) (the) archivist uses contextual information to generate an understanding of the various contexts of the records." This seems like it would be the standard approach to archiving. Are there any times where a different approach would be used?
3. "Visualization and VA hold much promise as tools to support a sustainable future for archives, particularly as cognitive aid to archivists in undertaking complex analytic tasks..." I heartily agree with this conclusion. I believe that the biggest hurdle to achieving this is providing, at a minimum, a similar level of functionality to what an archivist can achieve on their own, plus improved, intuitive interactions that allow archivists to utilize that functionality in a swift and easy manner.
1. I would really be interested in knowing whether or not there was a follow-up study to the one done in this article. I know, for me at least when I first learned processing, that the more times I repeated the action (the more collections or boxes I processed), the quicker I became at figuring out what I need to do. I would like to see them expand this -- maybe aside from just recently graduated archivists they could also sample people with varying degrees of experience in the field, or perhaps different areas of expertise?
2. I also feel this article does not take into account the life-cycle-like nature of archival processing. Some of the steps they have divided into 3 phases in this study can sometimes be repeated or rethought as the archivist becomes more familiar with the collection. Initial series ideas can be thrown out in favor of new ones, or maybe there are preservation issues.
1. Pirolli and Card’s model, derived from a study of sense-making in intelligence analysis, places emphasis on the importance of expert schemas, which are used to organize and represent incoming information. This model is considered the stark opposite of the DIKW hierarchy. Now, if DIKW is the widely accepted one, why is this model used instead of DIKW? What are the issues with DIKW such that it could not be considered in their study?
2. In the first phase of the findings of the research, the subjects reported being much more fatigued than in any other phase. Was this the result of information overload and an inability to process all of it? Can it be related to getting adjusted to being observed and performing the think-aloud protocol?
3. The top-down approach involves reading documents created by or about the creator. The bottom-up approach involves reading the records themselves. Which one of these is more efficient, and why? On what basis are these approaches chosen for a particular task?
1. What would visual analytics be like? On page 3, some scholars explain visual analytics as the "encoding of data into graphical images". So, what differences or advantages do visual analytics have over traditional charts and diagrams, which have been used to present information for many years?
2. My second question is about Fig. 1. The author claims that "neither data nor frame comes first...When there is no adequate fit, the data may be reconsidered or an existing frame may be revised." But is there any method to decide which action should be taken first, reconsidering the data or revising the frame?
3. As mentioned on page 13 and 14, each archivist may make different choices based on their own experience, and the process itself may differ depending on the archivist. Standing back from the possibility, is it necessary for archives to unify their ways of making choices?
1. I have to say the conclusion of the paper is far from convincing, for only two participants were involved in the experiments. How could the author be so sure that the participants are typical samples of all archivists?
2. What exactly does "archival fonds" mean? The term appears several times in the article, yet it's really hard for those who haven't studied in the archival field to understand.
3. Visualization is one of the key points of the article, and no one would deny the necessity of visualizing information. However, are there any differences in exploiting visualization in the archival field compared with other fields?
2. According to the google search I just conducted (we had the same question--what the heck is a fonds?), a fonds is an aggregation of documents coming from the same source. (Source: Wikipedia). While I believe all of this week's readings were intended for consumption by those in the archives community, a glossary or footnotes could have been helpful, especially for those new to the field.
1. In this article the author describes two different models of sense-making. The first model, put forth by Klein et al., describes the interaction between data and a frame, or mental model. The second model, put forth by Pirolli and Card, focuses on two major loops of processing: the first seeks and filters information into a schema, while the second takes that schema and creates a mental model. The author also compares these models to the Data-Information-Knowledge-Wisdom model. Which of these two models do you prefer as a description of sense-making activities? How do these models compare to the DIKW model?
2. In the experiment that the author conducted, she examined two archivists who were working together to process a small collection of records. These archivists were instructed to continually think their thoughts aloud as they worked. However, wouldn't the fact that both archivists were working in the same room and speaking aloud cause problems for the results of this study, since their processes might not be what they would normally do, because they would be reacting to the thoughts and actions of the other archivist?
3. In this article the author proposes a visual analytic system to help archivists in processing digital materials. In the Crow et al. article we saw another method proposed to help the processing of archival materials, the APT system. Which of these two systems do you think would be more useful to archivists who are processing archival materials? Is there a way you could perhaps combine the two systems? Would this combination improve the performance of both?
1. I'm interested in the concept of the rise in unstructured data (p. 2). How has this really affected archives and archival processing? How has this been approached professionally thus far, and has any methodology been created to handle it?
2. In regard to the subjectivity imposed by individual archivists during archival processing and description--I wonder how helpful it would be to also examine frameworks for archival processing and the subjectivity/impact those have on the ultimate archival process. Surely the subjectivity (though real) of an individual archivist isn't the only factor that influences the descriptive process and how archival records are ultimately used and perceived? Perhaps there is a benefit to the archivist's subjectivity that could be captured?
3. In relying on computer algorithms to create VA, I wonder if there are any anticipated consequences of incorporating technology in this way into archival arrangement? What would keep an archive from starting to use this kind of approach/methodology today?
1. The opening paragraph of this paper gives numbers on the increase of data and what percent of the data in the world is unstructured or transactional/structured data. What is unstructured and structured data? What would be an example of each?
2. Lemieux covers different ideas that have been posed by scholars in order to help in the arrangement and description processes of digital objects. One suggestion was that records management and archives should share metadata practices. I am intrigued by this idea in whether it would help save time during description processes. Have there been initiatives for records management and archives to better work together to combine metadata? If not, what have been the barriers to this?
3. During their study, Lemieux finds that there are three phases during archival processing: draft arrangement, refine arrangement, and description of the finding aid. I am wondering how the author is able to define the difference between these first two steps as they seem like one continuous process to me that might be difficult to break down into two parts. How do these steps compare to the 5 steps from the APT study?
1. In the methodology, the author states that the “study sought to explore archivists’ cognitive processes as they arrange and describe archival material as part of a broader design study on applying VA to the arrangement and description of archival material.” Is it possible to really understand the cognitive process of archivists through visual analytics? I also wonder how experience and expertise affect the visualization of this data.
2. Reading about visualization and archives makes me think about our increasing dependence on visualization for interpreting the abundance of data. Is it wrong to think that visualization can solve all of our information overload problems? Are there fields or sets of data where data visualization may wrongly lead us to the illusion that we can process the data better?
3. Reading about archives and the effect technology has had on the profession this week made me think about how information professionals should have a basic yet strong understanding of how programs are created and modified, so that they can clearly give their perspective to designers of programs or applications that are integral to our workflows. I think knowing how to communicate our ideas and desires in a technical manner could help the creation and design of the applications we need to perform our everyday duties.
1. Did anyone in this class attend the HiPSTAS workshop here at UT? Any interesting observations from it compared to this paper? https://www.ischool.utexas.edu/about/news/view_news_item?ID=408
2. Has anyone already experimented with crowd-sourced quality control in processing archival materials (p.3)?
1. I wonder how this process would have turned out differently had more experienced archivists been studied as well. With two fresh, out-of-the-box archivists, maybe the processes being observed were ones that follow a standard learned in the classroom but not practiced over time. Examining older, more experienced archivists would have provided some insight into how one works after years of experience and may provide more accurate information about archival processes in real life.
2. Could this type of observation and system development technique be used in areas other than visual materials? Do we think this same methodology might work for the collection and assessment of audio or video materials?
1. As in previous articles we have read, Lemieux discusses the push by some to hand the archival process to the masses and let volunteers manage it in order to help reduce the current backlog. She presents a different way to approach archival processing, however, in the form of VA. While I think this method is far more desirable than crowdsourcing or relying on volunteers, Lemieux herself states that the VA method will most likely not “supersede other solutions aimed at addressing growing archival backlogs, such as crowd sourcing descriptions . . . ”, but rather “complement” them (22). Still, when faced with alternative methods of archival processing (crowdsourcing, volunteers, etc.), shouldn’t there be a bigger push to implement and fund VA systems? Lemieux talked about the quality control issues with other methods (3). Wouldn’t the cost/effort of quality control measures equal or exceed those of a VA system?
2. Lemieux admits that a limitation of this study is its use of only two archivists. She states that, because of this limitation, “it is uncertain to what extent the preliminary findings are definitive or generalizable” (20-1). I got the impression that she felt this limitation didn’t have that big an impact on the study’s overall results. However, doesn’t it? Does the fact that this study uses only two archivists at the same institution negate its results? Sure, these archivists demonstrated a “three step” process instead of Meehan’s two-step one (15), but is this an isolated occurrence?
3. The proposed VA system has a lot of interesting features, including the ability to cluster items by file type and spatial layout, automated extraction of dates, and switching from cluster to thumbnail view (16-20). While these are impressive features, are they the only ones an archivist would need, or are they features the two archivists used in the study need? Lemieux mentioned the fact that the VA system is heavily “user-centered” (4) – does this mean each system could be different? Should it?
1 - Is there a way to quantitatively assess arrangement and description? I'm primarily thinking of a way to understand or evaluate how 'useful' or 'accessible' an arrangement schema is - while the article focused on arrangement behavior, we have no real understanding of what the resulting arrangement was. If we can't come up with a 'good' or 'correct' way to arrange materials, how can we develop tools to help us arrange documents in the first place?
2 - I'm curious about all of the variables present in attempting to assess arrangement behavior. Lemieux acknowledges the limitations of her study of only two individuals, but I would imagine there's an incredible scope of variables to control for, making studies of arrangement challenging. The type of material, archivists' experience, training and general understanding of their identity/role as archivists, subject specialties amongst professionals...have there been comprehensive studies done to assess arrangement behavior? And why, after years of archiving, have we just begun to discuss arrangement in these terms?
1. The author mentions that "There was not a strong focus on any specific template or archival style throughout the arrangement. Both subjects followed their own method without significant difficulties, and were able to integrate their work with one another." But if each archivist processes the collections differently with his/her own style, won’t this affect the neutrality of the processing? While acknowledging that humans are naturally biased, shouldn’t we establish standards to alleviate the bias?
2. The author’s research follows the principles of human-centered design, and the design process is very sound. But across the whole study there were only two subjects. The author acknowledges this limitation, but I think the limitation lies not only in the small number of subjects but also in their particular demographics. Both of them are recent graduates, which may also invalidate the findings.
3. The author says, “While the observations were done only on physical archives as opposed to digital ones, relevant information can still be drawn out and applied to the design of a VA tool for archivists to use in working with digital archives.” I doubt the validity of this claim. Different media require different sets of interactions, because users have different mental models when processing digital collections versus physical collections. Though the author can borrow the metaphor of processing physical collections, the design should still consider the different mindset of interacting with digital collections before the evaluation.
1. At one point, Lemieux points out that the two archivists used in the experiment have somewhat different approaches to archival processing due to their own, personal experiences. Many definitions of knowledge used in describing the DIK hierarchy use experience as a defining factor of knowledge. Having also cited the DIK hierarchy in her work, would it stand to reason that the different approaches are a result of different levels of knowledge?
2. In the experiment with the two archivists, they both began to recognize certain grant documents as they assessed the collection. This visual recognition led to shorter periods of time spent determining what was in the collection, as they could now almost immediately ascertain what was a grant and what was something else. As the author described two different 'loops' that archivists are cognitively fed through, would this recognition of grant documents fall into the foraging loop or the sense-making loop? Is there overlap between the two? Does one feed into the other?
3. The visual analytics model is based on identifying how archivists think. An understanding of where archivists come from, how they think, etc. is integral to creating a useful visualization process. But considering that every person is different and has varied experiences, is this the right approach to creating an effective model? Or is it that the majority of people will end up favoring this tool, while others will just disregard it because they don't fit into the generalized category?
1 There's a study cited by Pirolli and Card saying that experts do not simply review the data and match them to patterns directly from memory; instead, they select the relevant information and encode it in special representations. Would this cause a mess in the archival field if no rule regulated how archivists with different backgrounds and educations arrange and describe?
2 It is concluded in the paper that visualization and VA hold much promise as tools to support a sustainable future for archives, particularly as cognitive aids to archivists undertaking complex analytic tasks, like arrangement and description, which cannot yet fully be achieved using computational methods. It is also stated in the article that VA is visualization plus computational methods. But what is the difference between them? How does this analysis or computational method really work compared with visualization?
3 I also have a question about the related literature. The author provides a large amount of related literature in this paper. Thinking about our project, how should we better organize our related literature? The author seems to arrange these references in a way that gradually guides us to her topic, but along the way I see some content that is not that related to the topic. How do we decide what to include in our related literature? And would it be better if the author put more detail or description of the relation between the related literature and the topic, to give readers a clearer sense of how the paper developed?
1. On page 3 Lemieux states, in regard to using volunteers to help with collection item descriptions, that researchers "have found that establishing, maintaining, and monitoring forums for user engagement are themselves very demanding tasks, especially if quality control is to be achieved." But does the archivist himself or herself need to be the person managing the forum? Could that be done by another employee with occasional consulting from the archivist? This would free up the archivist to continue with needed work. It would cost less than two archivists' salaries while simultaneously doubling the work performed.
2. Are there best practices for businesses to set up metadata at the creation and through the revisions (life) of a document? Would everything that needs to be recorded be logged in MS Office products? If not, archivists should work with the major software companies whose products the majority of users utilize to create documents (Microsoft, Adobe, etc.) to get these features and this information recorded into the history of the document. Perhaps automating the process as much as possible and requiring the user to enter information?
3. "This will provide us with data about the spatial regions that should be given greater weight in the design of our clustering algorithm" p. This seems like it would have to be very particular to the type of document. There are many spatial arrangements for identifying the different document types, so I'm sure that many different types of documents would overlap. The archivist would thus have to eliminate certain types of documents that the software would match against prior to using it. For example, the grant application might be similar to another form type but is not applicable to this collection.
4. It seems that one skill set of archivists is visual thinking: being able to recognize a document visually by what it looks like, where it is located, etc. Can we also learn something from studying visual thinkers that may help archivists and further explore the tools that they need?
4 - I like your point about visual thinking. Something that's addressed in AEI is the idea that we can recognize certain types of documents without needing to read the entire page (for example, letters have a very specific format) and where we should look for 'clues' about authorship, such as a secretary's initials in the corner that might allow us to connect the pages to a specific creator. It would be interesting to see where we could take these skills both in and out of the technology sphere. I think the proposed study of eye-tracking that Lemieux brings up would be an excellent source of information for how archivists are 'reading' collections.
1. On page 2, Evans suggests "that instead of constructing detailed descriptions themselves, archivists should rely on web-based user contributions, establishing systems that use ‘‘the eyeballs and the intellect of… volunteers…throughout the world." This is a terrible idea for so many reasons. If these volunteers have no archival experience, wouldn't they be a hindrance and not a help? Some serious standards would have to be implemented if volunteers were to contribute. Volunteer help would probably create extra work and lost time because it would need to be checked by whoever is in charge of the department. I guess it varies by institution, but are volunteers just creating extra work for institutions? Then again, in many cases archives and museums are happy to get any help they can.
2. The whole process of writing down the two archivists' thoughts during their work was interesting, but I can't help wondering if this data is skewed. What information or speech did the researchers leave out that they thought was unnecessary? I know I babble/ramble to myself when working on things, and I guess it's part of the research/thought process, but I don't think it would be valuable/coherent enough to use to draw research conclusions. It also says "sessions were conducted in the participants’ normal archival location, so as to achieve a more naturalistic recording of the methods involved in archival processing." I'm guessing the speech or academic/professional language might vary if the two archivists were interviewed in their own homes.
3. On page 17, it says that although "the observations were done only on physical archives as opposed to digital ones, relevant information can still be drawn out and applied to the design of a VA tool for archivists to use in working with digital archives." How so? Physical and digital archives are totally different things. Shouldn't there be a study done on the differences between research approaches to digital and physical materials?
1. In the methodology section, two archivists participated in the study. The research team then observed the archival processing and summarized it into a three-stage sense-making process. However, I think observations from only two participants are not reliable enough. Maybe these two archivists graduated from the same school and so are too similar to each other, while other archivists follow different archival processes.
2. Since the observation was done only on physical archives, why was the author so sure that the VA tool they designed could be used in working with digital archives? The file type is the most obvious attribute in their VA tool's design; however, file types on a computer are totally different from physical file types. In a word, I think they have a very weak and unreliable foundation on which to build the VA tool.
3. The author mentioned that the resulting VA tool would be used as a test bed once the prototype was developed. It is reasonable to yield additional insight and test how well the solution works in the built environment; however, I wonder how to evaluate it and make sure the prototype is good enough. They wanted to use these insights to refine the model; I wonder how the next step would work, maybe with a large group of study participants.
1. With so much emphasis being placed on the expectation that metadata will be applied at the beginning of the digital lifecycle for VA systems like this to work, I find it very surprising that we have chosen PDF to distribute so many documents in the 21st century. In order for us to start building better metadata, it seems there should be some sort of intervention between file format creators and the archives world. Should we be building better finding aids into our documents at the start?
2. There seems to be an interesting connection between the digital processing table in the last article and VA systems in that they both attempt to help aid in optimal arrangement scenarios for the archivist. It seems that context plays a huge role in how an archive is arranged and for this reason I am wondering if visual systems might be useful in helping to organize archives specifically for the interests of the viewer as opposed to trying to figure out a one size fits all arrangement scenario.
3. It seems that recent research into visual mapping could be useful in helping to create an algorithm that sorts through which areas of documents are most useful to archivists. It would be interesting to actually see a model of the areas that a trained archivist looked at when assessing documents, as opposed to performing research on a small group of new archivists.
1. Short visual cues can certainly be of use in locating objects among an organized or semi-organized grouping. Doctor’s offices have used files with colored tabs for the first letters of a patient’s last name for decades. Are such visual shorthand methods what the authors of this paper are proposing for archival groupings?
2. Lemieux notes that visualization thus far has been directed at end users rather than assisting archivists in their initial work. This seems like a useful tool to aid both, though the tactile assistance gained from a physical object is lost with digitized versions. How might digital visual tagging be utilized to try and recreate that immediate identification?
3. Given the large number of born-digital and already-digitized archives that exist and continue to form, how can we move backwards to add these visual identifiers to pre-existing archives? The author suggests creation of a complex algorithm based upon usage studies of several archivist pairs but such a system would have to automatically go through sometimes literally millions of documents without error and would be expensive to develop. How much can this concept of visual identification truly be automated?
1. In the current era of big data, the traditional methods of the archival field cannot keep up. But why does the author believe VA or visualization is a good approach to solving this problem? I haven’t studied anything about VA, but I’m interested in it. I doubt whether VA will fundamentally solve the problem.
2. ‘The application of visualization and VA in the archival domain primarily has relied upon using archival descriptive metadata’ (p10). So why does VA in the archival domain only focus on descriptive metadata? Is there anything else in the archival domain that can apply VA and visualization? What are they and what’s the difficulty in doing that?
3. When talking about the limitations, the author says that ‘the study used physical archives, and it is still unknown to what extent the processing of entirely digital collections changes the analytic process’ (p21). I think digital archives are the mainstream now, so I wonder why the author didn’t use digital archives for the research. Is that difficult?
1) Because I don’t have much background in archives jargon, I was not clear on what Lemieux meant by “fonds,” and so that section was a bit confusing for me. Can we discuss what a “fond” is, and what the author means by “fonds versus record group and fonds versus series-based arrangement and description” (page 5)?
2) I find it interesting that archivists’ decision-making process has largely gone unexamined in the literature. Especially after reading the Cook article, Lemieux’s observation seems to imply that the archival field has only recently ceased to take their decision-making processes for granted. This is very striking to me, as humanities fields have been (somewhat exhaustively) examining the question of how decision-making systems regarding history are installed and reinforced for a very long time now. Given the close ties between archival studies and disciplines like history, why have archivists only recently begun to conduct empirical examinations of their decision-making processes?
3) It’s fascinating how much difficulty empirical researchers seem to have nailing down the procedures and methods used by archivists, or by information professionals in general—in other words, their tacit knowledge! It seems that it’s difficult to systematically analyze tacit human reasoning, and that it is thus very difficult to synthesize or standardize it. How can this obstacle be overcome?
The simplest way I can think to explain this is fonds are what we call collections or record groups. While in the US and Canada we use collection to indicate the entire aggregation of documents, fonds is what gets used in France, the UK, and other European countries. Here, Wiki can help with this: http://en.wikipedia.org/wiki/Fonds
The big sort of point of contention is that in Europe they have 'respect des fonds' which in a nutshell means they don't do a lot of work rearranging files in folders and changing the original order materials came in to the archives under, whereas depending on the archives here in the U.S. our point of view can be distilled as 'I do what I want' a lot. If you like I can send you a copy of some of my slides and notes from Archival Enterprise I because Ciaran knows a lot about this (and tries ever so hard to make us understand it).
1. This article introduced a new term which so far we have not discussed. What exactly are "archival fonds" and how are they used? Do they fall under description or organization?
2. The author describes some very exciting new ideas for technological advances, for example computer algorithms for spatial layouts to help identify document types. However, the archival field has been greatly impacted by economic setbacks. Where do the proponents of these new technologies expect to find funding for these technological advances?
3. Meehan described two steps in the analytic process along with two stages -- top down and bottom up, but in Lemieux's study those steps and stages were not used by the subjects. Were those steps and stages just suggested practice by Meehan or had they been put into practice sometime previously?
1. We have read so many articles, including this one, that emphasize how quickly the information we handle today is exploding. We can deny it, but have the authors considered how much of it can be processed or filtered by computers without human intervention? Is the situation really as serious as the numbers suggest?
2. Visual analytics can display information graphically, in a more direct way, which is reasonably efficient for archival arrangement and description. But what could we do to maintain the accuracy of the research?
3. It is interesting that the researchers at UT Austin had developed a prototype VA system to aid archivists in identifying files requiring digital preservation, and as a possible VA tool. Is it the APT system mentioned in the other reading? If not, could the APT system be regarded as a VA tool?
On page 5, Lemieux mentions that “authors do not make clear how archivists actually reason about their concrete application of the concepts when arranging and describing different fonds.” Does this lack of explanation stem from an assumption as to the overall process undertaken by archivists as they go about processing archives?
Given how human beings interact with the world, it makes sense that interacting with archival documents in the form of refoldering/rehousing helps archivists gather quite a bit of information about each document. Do the authors have a time measurement for this sort of visual and tactile interaction with the documents? And, if so, what is the optimal visual time spent on a document to gather enough information for creating a mental model?
The mention of GUIs and TUIs brings to mind the use of a tool such as the APT (Augmented Processing Table). Would incorporating a system such as the Oculus Rift, perhaps in the initial processing phase or beyond, alongside a device like the APT, provide a tandem that would speed up the archival process? By placing the documents/media etc. in a three-dimensional world and allowing an archivist to essentially “walk” through all the documents that need to be archived, it could give archivists another level of visualization.
1. The author talks quite a bit about the form of a document and its ability to aid archivists in inferring contextual information. She goes on to suggest, or put forth, the idea of grouping by form, giving the example of grant applications. I may be misunderstanding exactly what she means by form, but don’t archivists typically want to steer clear of arrangement based on document form? Maybe someone could help clarify how form aids in contextualization.
2. Although the author recognizes the limitations of her approach and addresses them within the paper, it seemed, even so, that one team of archivists processing one collection could not possibly inform how visual analytics should be incorporated into archival processes, especially in translating from a physical to a digital environment. Regardless of the context for the research, do you think the findings assist in making well-informed and thoughtful deductions about next steps and future needs?
3. Towards the end of the article, the idea of a system which replicates materiality is put forth. The author suggests that materiality is a cognitive aid, but is unsure of the form in which this might be achieved as a feature of a VA tool. First off, I’m not entirely sure what is meant by materiality, as this term has such a different meaning within a digital environment. How can this concept of materiality be described or made explicit?
re: 3. I felt curious about the concept of materiality as well. For instance, how do you convey weight, smell, texture, etc? Do you even need to record those characteristics?
1. "Another approach was suggested by Evans, that instead of constructing detailed descriptions themselves, archivists should rely on web-based user contributions, establishing systems that use 'the eyeballs and the intellect of… volunteers… throughout the world'" I really like this idea, but I find it hard to believe that a project like this wouldn't come under some kind of attack from the Internet. Perhaps this is just me being a pessimist, and I by no means think this wouldn't be productive at all, but I'm not sure I would trust somewhat anonymous volunteers to provide archival levels of integrity. Surely there would be some kind of approval process for information submitted this way, but I also think that when faced with tons of information to verify, the human part of the process has a reasonable chance of failing.
2. This article talks a lot about the process of archiving. The author mentions that "...the archivist gathers contextual information and...the(n) (the) archivist uses contextual information to generate an understanding of the various contexts of the records." This seems like it would be the standard approach to archiving. Are there any times where a different approach would be used?
3. "Visualization and VA hold much promise as tools to support a sustainable future for archives, particularly as cognitive aid to archivists in undertaking complex analytic tasks..." I heartily agree with this conclusion. I believe the biggest hurdle to achieving this is providing, at a minimum, a level of functionality similar to what an archivist can achieve on their own, along with improved, intuitive interactions that allow archivists to use that functionality in a swift and easy manner.
1. I would really be interested in knowing whether or not there was a follow-up study to the one done in this article. I know, for me at least when I first learned processing, that the more times I repeated the action (the more collections or boxes I processed), the quicker I became at figuring out what I need to do. I would like to see them expand this -- maybe aside from just recently graduated archivists they could also sample people with varying degrees of experience in the field, or perhaps different areas of expertise?
2. I also feel this article does not take into account the lifecycle-like nature of archival processing. Some of the steps divided into three phases in this study can sometimes be repeated or rethought as the archivist becomes more familiar with the collection. Initial series ideas can be thrown out in favor of new ones, or maybe there are preservation issues.
1. Pirolli and Card’s model, derived from a study of sense-making in intelligence analysis, places emphasis on the importance of the expert schemas used to organize and represent incoming information. This model is considered the stark opposite of the DIKW hierarchy. Now, if DIKW is the widely accepted one, why is this model used instead of DIKW? What are the issues with DIKW such that it could not be considered in their study?
2. In the first phase of the research findings, the subjects were reported to be much more fatigued than in any other phase. Was this the result of information overload and an inability to process all of it? Could it be related to adjusting to being observed and performing the think-aloud protocol?
3. The top-down approach involves reading documents created by or about the creator. The bottom-up approach involves reading the records themselves. Which of these is more efficient, and why? On what basis is one of these approaches chosen for a particular task?
1. What would visual analytics be like? On page 3, some scholars explain visual analytics as "encoding of data into graphical images." So, what differences or advantages do visual analytics have over traditional charts and diagrams, which have been used to present information for many years?
2. My second question is about Fig. 1. The author claims that "neither data nor frame comes first...When there is no adequate fit, the data may be reconsidered or an existing frame may be revised." But is there any method to decide which action should be taken first, reconsidering the data or revising the frame?
3. As mentioned on pages 13 and 14, each archivist may make different choices based on their own experience, and the process itself may differ depending on the archivist. Given that possibility, is it necessary for archives to unify their ways of making choices?
1. I have to say the conclusion of the paper is far from convincing, for only two participants are involved in the experiments. How could the author be so sure that the participants are the most typical samples of all archivists?
2. What does "archival fonds" exactly mean? The term appears several times within the article, yet it's really hard for those who don't study in archival fields to understand.
3. Visualization is one of the key points of the article, and no one would deny the necessity of visualizing information. However, are there any differences in exploiting visualization in the archival field compared with other fields?
2. According to the google search I just conducted (we had the same question--what the heck is a fonds?), a fonds is an aggregation of documents coming from the same source. (Source: Wikipedia). While I believe all of this week's readings were intended for consumption by those in the archives community, a glossary or footnotes could have been helpful, especially for those new to the field.
1. In this article the author describes two different models of sense-making. The first model, put forth by Klein et al., describes the interaction between data and a frame, or mental model. The second model, put forth by Pirolli and Card, focuses on two major loops of processing: the first seeks and filters information into a schema, while the second takes that schema and creates a mental model. The author also compares these models to the Data-Information-Knowledge-Wisdom (DIKW) model. Which of these two models do you prefer as a description of sense-making activities? How do these models compare to the DIKW model?
2. In the experiment the author conducted in this article, she examined two archivists who were working together to process a small collection of records. These archivists were instructed to continually think aloud about their thoughts and what they were doing. However, wouldn't the fact that both archivists were working in the same room and speaking aloud cause problems for the results of this study, since their processes might not be what they would normally do because they would be reacting to the thoughts and actions of the other archivist?
3. In this article the author proposes a visual analytic system to help archivists in processing digital materials. In the Crow et al. article we saw another method that was proposed to help the processing of archival materials in the APT system. Which of these two systems do you think would be more useful to archivists who are processing archival materials? Is there a way that you could perhaps combine the two systems? Would this combination improve the performance of these two systems?
1. I'm interested in the concept of the rise in unstructured data (p. 2). How has this really affected archives and archival processing? How has this been approached professionally thus far, and has there been any methodology created to handle it?
2. In regard to the subjectivity imposed by individual archivists during archival processing and description--I wonder how helpful it would be to also examine frameworks for archival processing and the subjectivity/impact those have on the ultimate archival process. Surely the subjectivity (though real) of an individual archivist isn't the only factor that influences the descriptive process and how archival records are ultimately used and perceived? Perhaps there is a benefit to the archivist's subjectivity that could be captured?
3. In relying on computer algorithms to create VA systems, I wonder if there are any unanticipated consequences of incorporating technology into archival arrangement in this way. What would keep an archive from starting to use this kind of approach/methodology today?
1. The opening paragraph of this paper gives numbers on the increase of data and what percent of the data in the world is unstructured or transactional/structured data. What is unstructured and structured data? What would be an example of each?
2. Lemieux covers different ideas that have been posed by scholars in order to help in the arrangement and description processes of digital objects. One suggestion was that records management and archives should share metadata practices. I am intrigued by this idea in whether it would help save time during description processes. Have there been initiatives for records management and archives to better work together to combine metadata? If not, what have been the barriers to this?
3. During their study, Lemieux finds that there are three phases during archival processing: draft arrangement, refine arrangement, and description of the finding aid. I am wondering how the author is able to define the difference between these first two steps as they seem like one continuous process to me that might be difficult to break down into two parts. How do these steps compare to the 5 steps from the APT study?
1. In the methodology, the author states that the “study sought to explore archivists’ cognitive processes as they arrange and describe archival material as part of a broader design study on applying VA to the arrangement and description of archival material.” Is it possible to really understand the cognitive processes of archivists through visual analytics? I also wonder how experience and expertise affect the visualization of this data.
2. Reading about visualization and archives makes me think about our increasing dependence on visualization for interpreting the abundance of data. Is it wrong to think that visualization can solve all of our information overload problems? Are there fields or sets of data where data visualization may wrongly lead us to the illusion that we can process the data better?
3. Reading about archives and the effect technology has had on the profession this week made me think about how information professionals should have a basic yet strong understanding of how programs are created and modified, so that they can clearly give their perspective to designers of programs or applications that are integral to our workflows. Knowing how to communicate our ideas and desires in a technical manner could help the creation and design of the applications we need to perform our everyday duties.
1. Did anyone in this class attend the HiPSTAS workshop here at UT? Any interesting observations from it compared to this paper? https://www.ischool.utexas.edu/about/news/view_news_item?ID=408
2. Has anyone already experimented with crowd-sourced quality control in processing archival materials (p.3)?
1. I wonder how this process would have turned out differently had more experienced archivists been studied as well. With two fresh, out-of-the-box archivists, maybe the processes being observed were ones that follow a standard learned in the classroom but not refined through practice. Examining older, more experienced archivists would have provided some insight into how one works after years of experience and may provide more accurate information about archival processes in real life.
2. Could this type of observation and system development technique be used in areas other than visual materials? Do we think this same methodology might work for the collection and assessment of audio or video materials?
1. As we have read in previous articles, Lemieux discusses the push by some to give the archival process to the masses and let volunteers manage it in order to help reduce the current backlog. She presents a different way to approach archival processing, however, in the form of VA. While I think this method is far more desirable than crowdsourcing or relying on volunteers, Lemieux herself states that the VA method will most likely not “supersede other solutions aimed at addressing growing archival backlogs, such as crowd sourcing descriptions . . . ”, but rather “complement” them (22). Still, when faced with alternative methods of archival processing (crowdsourcing, volunteers, etc.), shouldn’t there be a bigger push to implement and fund VA systems? Lemieux talked about the quality control issues with other methods (3). Wouldn’t the cost/effort of quality control measures equal or exceed those of a VA system?
2. Lemieux admits that a limitation of this study is its use of only two archivists. She states that, because of this limitation, “it is uncertain to what extent the preliminary findings are definitive or generalizable” (20-1). I got the impression that she felt this limitation didn’t have that big an impact on the study’s overall results. However, doesn’t it? Does the fact that this study uses only two archivists at the same institution negate its results? Sure, these archivists demonstrated a “three step” process instead of Meehan’s two-step one (15), but is this an isolated occurrence?
3. The proposed VA system has a lot of interesting features, including the ability to cluster items by file type and spatial layout, automated extraction of dates, and switching from cluster to thumbnail view (16-20). While these are impressive features, are they the only ones an archivist would need, or are they features the two archivists used in the study need? Lemieux mentioned the fact that the VA system is heavily “user-centered” (4) – does this mean each system could be different? Should it?
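Not from the paper itself, but to make two of those features concrete: here is a minimal Python sketch of what "cluster items by file type" and "automated extraction of dates" might look like under the hood. All function names, the sample filenames, and the assumed YYYY-MM-DD date convention are my own illustrative assumptions, not Lemieux's implementation.

```python
import re
from collections import defaultdict
from pathlib import PurePath

def cluster_by_file_type(paths):
    """Group file paths by extension, mimicking a 'cluster by file type' view."""
    clusters = defaultdict(list)
    for p in paths:
        ext = PurePath(p).suffix.lower() or "(none)"
        clusters[ext].append(p)
    return dict(clusters)

# Assumes dates appear in filenames as YYYY-MM-DD or YYYY_MM_DD.
DATE_PATTERN = re.compile(r"(\d{4})[-_](\d{2})[-_](\d{2})")

def extract_date(filename):
    """Pull a date out of a filename, if one is present; return None otherwise."""
    m = DATE_PATTERN.search(filename)
    return "-".join(m.groups()) if m else None

files = ["report_2003-06-14.pdf", "photo_2001_09_30.tif", "notes.txt"]
print(cluster_by_file_type(files))
print([extract_date(f) for f in files])
```

A real VA system would of course layer an interactive visual interface over logic like this (and would likely read dates from embedded metadata rather than filenames), but the sketch shows how mechanical these two features are compared to the interpretive arrangement work the archivists themselves perform.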