Thursday, October 17, 2013

10-24 Tefko Saracevic. 2007. Relevance: A Review of the Literature and a Framework for Thinking on the Notion in Information Science. Part III: Behavior and Effects of Relevance

32 comments:

  1. 1. Early in the article, Saracevic explains the setup of his review and the types of studies he did and did not include. While I understand that the author couldn't have included every study on relevancy, were the studies included too limited? He states that he deliberately excluded studies of implied relevance ("an action such as saving a page or document is regarded as implying relevance") even though they are "related to relevance by assumption." I think these studies in particular would have added dimension to Saracevic's work, and I would have liked to see them. However, would such studies have made the review too broad?

    2. Nearly all the studies Saracevic provides in his review were conducted using university-level students and faculty. Some include younger students (5th grade) or "the general public." Later in the review, the author explains that this is because "with little or no funding, other populations are much more difficult to reach". This is certainly true. Should the field of relevance research branch out to include other populations? If so, how can researchers ensure they are getting a comprehensive study?

    3. Saracevic states that "individual differences is a, if not the, most prominent feature and factor in relevance inferences". If that's true, couldn't the case be made that any document could be deemed relevant at any time by any given person? Is the goal of relevance research, then, to determine how relevancy is determined in order to make "relevant" documents easier to access? Or is relevancy research aimed more at simply observing the cognitive process?

    ReplyDelete
  2. 1. The discussion of the history of relevance studies includes summaries of many studies done from the early 1990s through 2005; however, the sample sizes seemed very small. Is this because of the difficulty of undertaking these studies with limited or no funding, which Saracevic discusses later in the article? He mentions that studies are often done on student populations, and that this may present a serious bias problem. I wonder if any commercial studies, such as those done by Google or Yahoo!, are of greater scope?

    2. My question above was partly addressed on page 2140, where Saracevic says, "Increasingly, relevance is becoming proprietary because major search engines are proprietary. Information retrieval techniques used by a majority of larger search engines are well known in principle, but proprietary and thus unknown in execution and detail." He points out the irony of the "free, democratic, open" web contrasted with the proprietary nature of relevance research, undertaken and guarded by for-profit companies. What is being done to tackle this? Can any of this research be synthesized into the academic study of relevance without compromising the "trade secrets" of the corporations who control web searches?

    3. Saracevic talks A LOT about "TREC," but never defines it! Of course, I Googled it (har har), which turned up the Texas Real Estate Commission; after combining "TREC" with "Saracevic," I figured it out: the Text Retrieval Conference. I had no idea that this concept of "relevance" - which, until these readings, was merely an interesting idea for me - was the subject of such intense and popular study! I don't really have a question; I just wanted to express once again my dumbfoundedness at discovering a whole area of inquiry during my first semester of iSchool!

    ReplyDelete
  3. 1) I thought it was a little funny that Saracevic continually contrasted students as study subjects with “real people” in “real situations.” I understand what he meant—that students’ information needs and behaviors are not necessarily the same as those of non-academic users and thus that user studies should not be limited to them. But student information behaviors are surely real and important, especially given that students and academics are among the most prolific information-seekers. Is it really fair to dismiss these behaviors as not being “real” when the problem is that they’re too specialized?

    2) In the conclusion Saracevic raises a point that has been gestured toward repeatedly in our readings—the tension between free information and the intellectual community on the one hand, and proprietary interests on the other (p. 2140). How is it possible for the more poorly-funded nonprofit academic world to conduct publicly-available studies on information retrieval and relevance, when the private sector is so intent on keeping its trade secrets close to the vest?

    3) It was interesting that, due to the continuing influence of behavioral psychology, so many empirical studies of relevance treat human subjects as a “black box” in the same way that computers are often treated as a black box (p. 2138). How can information retrieval researchers incorporate elements of, e.g., cognitive psychology into their research methodologies?

    ReplyDelete
  4. 1. Toward the very end of the article, Saracevic notes that students are predominantly used in studies due to lack of funding. I see why this is necessary, but wonder how researchers can then ascertain that higher expertise results in relevance judgments with higher overlap (p. 2137)? I ask this because students don't tend to be experts in the fields they're studying - that typically comes with time and experience. So does the lack of studies on people who, well, know what they're doing and looking for, impact the study of relevance as a whole? If so, in what ways?

    2. I find it surprising that funding for projects researching relevance has been stymied. In a world where there is so much to search and sift through, you would think that relevance would be of higher and higher import. People are searching for specific information, but at times it is difficult to ascertain what is relevant and what is not due to the sheer amount of information available. Is it simply that newer, shinier fields of research have come to light, thus pushing relevance out of the way?

    3. On page 2138, Saracevic says that it is not just people who change their approaches or mindsets toward relevance; at times, the same objects within a collection can present themselves as relevant at different times. It is almost as if the objects being used to draw relevance have a life of their own. Is the point just that the same collection (like what Dr. Trace talked about during the lecture last week) can gain or lose relevance depending on the context in which it is presented?

    ReplyDelete
    Replies
    1. 1. I was wondering the same thing about the overwhelming number of students and professors who were the subjects of these studies, and how that would affect the results, given that higher expertise leads to greater consistency. I came to the opposite conclusion from yours, though, regarding the expertise/experience of the students. I had been thinking that graduate students are probably better at determining relevancy in searches somewhat related to their field of study, even if not their exact sub-specialty. Generally, a history student could determine the relevancy of articles on a history subject they haven't studied better than they could for a science they don't study. Some of the cases presented in the reading do mention that the students were allowed to search for their real information needs. However, most of the summaries don't mention whether the search content was related to the students' specialties, so we can't know.

      Delete
  5. 1. "In other words, although relevance was not explicitly discussed, an action such as saving a page or document is regarded as implying relevance" (p. 2127). I wonder why relevance isn't explicitly discussed. And if relevance isn't explicitly discussed, how can we study it?

    2. I am very interested in image clues. Currently, as far as I know, only a few search engines can search images and do a good job of it. What are the reasons for this 'gap' in image searching? And how can we make images relevant? I think the answer may be metadata, which can turn 'images' into 'text.' However, can we use images themselves to establish their relevance?

    3. "A number of factors affect relevance judgments; for instance and as mentioned, Schamber (1994) listed 80 factors grouped in six categories and Harter (1996) 24 factors grouped in four categories, both in tables" (p. 2136). If there are so many factors, making judgments becomes very complicated. Why are there so many factors? And how can we make judgments in such a complex context?

    ReplyDelete
  6. 1. Page 2128 talks about image clues briefly. This isn't really a question; it's more of a pitch for a product that uses image clues extensively. If any of you don't know about the mobile app Google Goggles, you should really check it out. It's pretty useful if you can't remember the name of something you are looking at. You take a picture with your phone, and the app scans that picture and attempts (not always successfully) to bring up the correct name or information you were looking for. When I've used it, it's been really useful for looking up paintings, books, and landmarks. With faces, however, it has a harder time, especially if the person you are looking for is kind of obscure. Check it out.


    2. On page 2134, under the section "Beyond Stability," it says "Are relevance judgements stable as tasks and other aspects change?... However, briefly, relevance judgements are not completely stable; they change over time as tasks progress from one stage to another and as learning advances. What was relevant then may not be necessarily relevant now and vice versa. In that respect Plato was right: Everything is flux. However, the criteria for judging relevance are fairly stable." If everything is flux, how are the criteria for judging relevance fairly stable? I'd say the criteria change just as much.

    3. On 2135, Saracevic says "How do you evaluate something solely on the basis of human judgements that are not stable and consistent? This is a perennial question, even a conundrum, for any and all evaluations based on human decisions that by nature are inconsistent, way above and beyond IR evaluation. As far as I can determine there are only six studies in some four decades that addressed the issue." In reading this article and the Croft article, that too is my main question. But if Saracevic is right that there have been only six studies of this issue, how big of a problem is it really? If we've gotten this far in the IR field, then I'd say IR professionals have done a pretty good job with the unending search queries and entries they analyze. If there were more studies, how long would they remain relevant if human judgment is always changing? Is the job of the IR professional always to do the best with what they have, considering that human judgment is something out of their control?

    ReplyDelete
  7. 1 In this review, the author discusses relevance clues when considering relevance behavior. He says clues research aims to uncover and classify the attributes or criteria that users concentrate on while making relevance inferences, and he summarizes some research on this. As far as I can see, this research is all based on observations of how people find a result relevant. Human thought is complex, far more complex than a simple computer algorithm. So how can we use findings about human behavior or human thought to improve computer algorithms? Do the two really relate?

    2 The author also reviews research on relevance feedback, and I have a question about how we decide whether a result is relevant. Is that a simple yes-or-no judgment, or do we need to calculate a degree of relevance? And based on this research, to what degree must something be relevant to count as relevant?

    3 The author discusses the factors inherent in relevance judges that make a difference in relevance inferences. He also discusses the factors that affect relevance judgments, such as topicality, binariness, independence, stability, and consistency. What is the difference between relevance judges and relevance judgments?

    ReplyDelete
  8. 1. On page 2131, studies are presented that look at how differently relevance was rated among people of similar educational backgrounds. The Saracevic and Kantor study of 1988 shows that five searchers were asked to find materials for the same questions. The resulting 18% overlap in retrieved sources seems very low to me. The Gull study showed 30% relevancy overlap in another case. If there is this much individual influence on relevancy, how can studies or work that requires people to determine relevancy be trusted for consistency? For example, how does this influence how we view the accuracy of legal document discovery, which requires humans to read through documents to assess their relevancy to the case?
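    (To make the overlap numbers concrete, here is a minimal sketch of how pairwise overlap between two searchers' retrieved sets can be computed. This is my own illustration using an intersection-over-union measure; the exact measure used in the cited studies may differ, and the document IDs are hypothetical.)

```python
def overlap(set_a, set_b):
    """Intersection-over-union overlap between two sets of retrieved items.

    One common definition of overlap; the measure used in the cited
    studies may differ in detail.
    """
    union = set_a | set_b
    return len(set_a & set_b) / len(union) if union else 0.0

# Hypothetical example: two searchers retrieve sources for the same question.
searcher_1 = {"doc1", "doc2", "doc3", "doc4", "doc5"}
searcher_2 = {"doc3", "doc4", "doc6", "doc7", "doc8"}
print(f"overlap = {overlap(searcher_1, searcher_2):.0%}")  # overlap = 25%
```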

    2. On page 2137, Saracevic concludes that information presented earlier in search results is more likely to be considered relevant than information presented later. What does this mean for our ability to appraise sources in commercial searches like Google's, where the engine controls the order of the results?

    ReplyDelete
  9. 1. In answering the question "What makes information or information objects relevant?", the author notes that there are two approaches: one deals with topical relevance only; the second includes cognitive, situational, and affective relevance as well. Are both approaches valid? If so, why are there two approaches in this field? Does one have advantages over the other?


    2. Regarding the inconsistency of relevance judgments among individuals, are there any significant factors affecting the level of inconsistency, for example nationality, gender, etc.?

    3. The author criticizes that "Most often, studies of individual differences in relevance inferences involved observing plain statistics of differences or degrees of overlap, with little or no diagnostics of underlying factors." But isn't the lack of diagnostics of underlying factors in the literature he reviews a consequence of his concentrating exclusively on observational, empirical, or experimental studies? That is, the gap he finds may be due to the nature of the research he reviews.

    ReplyDelete
  10. This comment has been removed by the author.

    ReplyDelete
  11. 1. First of all, I have to say I greatly admire the author's 30-year continuous commitment to IR. After reading his literature review, I'm interested in how the perspective on IR thirty years ago differs from today's. What were the research focal points in IR thirty years ago? More on retrieval algorithms than on user behavior, given that there were no user-friendly UIs at the time at all?

    2. In the Relevance Clues part of the Relevance Behavior section, the author introduces more than ten studies, each of which identified a range of categories of relevance criteria. However, the total number of categories, and I assume their content as well, differs from study to study. In this case, is there a universally acknowledged set of categories of relevance criteria, even within a specific topic?

    3. In the Relevance Judges part of the Effects of Relevance section, the author seems to miss a low-cost approach to judging relevance known as crowdsourcing. I think this is because the paper was written in 2007, before crowdsourcing became a popular concept. On a crowdsourcing platform, say MTurk, anyone who meets basic quality requirements can become a worker and judge the relevance of a given task published by the task owner. Its reliability has been verified through real cases. My question is: which would be better for evaluating the relevance of a set of given results, experts or non-experts?

    ReplyDelete
  12. 1. The author comments that one major limitation of the many studies performed on relevance is that they are nearly all performed on students, an easy subject population to reach with genuine research needs. What limitations does the influence of this population place on larger relevance studies?

    2. Saracevic comments that a great deal of relevance research is on proprietary systems, such as search engines, which makes it difficult to assess how relevance is being utilized in larger real-world situations. Is there a way to persuade these proprietary engines to share some of their data with the research community without compromising “trade secrets” or exposing the engines to unwanted spam activity?

    3. Rankings by "relevance" from commercial engines often seem visibly influenced by commercial concerns. eBay, for instance, promotes auctions that have paid extra to be spotlighted in their chosen categories, and even while selling this service it must pay a large staff to scour listings for inappropriately categorized items. To what degree do these commercial influences create distrust of "relevance" in users?

    ReplyDelete
  13. On page 2128, Saracevic mentions a study by Hirsh (1999) that tracked children's relevance criteria. Given how early young people are exposed to IR and develop their own concept of relevance, would it be beneficial for the IR community to engage in longitudinal studies tracking IR/relevance as it evolves for a group of individuals as they age and interact with different query needs?

    Validity is listed as a relevance clue from which a person might make a relevance judgment. Some validity is clear-cut, coming from well-known educational institutions and the like, but recently we heard about open-access journals not vetting articles and letting in a hoax article. Given the nature of the internet as it currently stands, how murky has the validity aspect of relevance judging become over time?

    Along the same lines as my first question, Saracevic mentions how relevance changes over time and when comparing documents; in essence, users experience their queries pragmatically and shift according to new input. Since this seems to be the case, what has led to the lack of research funding for relevance? Is it simply easier for IR researchers to focus on other aspects, or is it, as Saracevic says, that funding is "spotty and without agenda or direction" (p. 2139)?

    ReplyDelete
  14. 1. In many of the studies cited by Saracevic, the authors seemed to study a user group with very specific needs. While this research seems important for obvious reasons, I was a little troubled that few researchers attempted a broader approach to key characteristics of general IR principles. I was particularly interested in the Barry and Schamber work of 1998, which compared results from two studies to find similarities in criteria for relevance. With the advent of big search engines, has research like this become more important? Are others doing similar work?

    2. I’m curious how researchers make the distinction between user error, system error, and design error. Or do researchers assume that all problems are system errors by design, based on the general principle that systems should be able to overcome user incompetence?

    3. I was a little confused at the end of the article about the relevance of students in testing methodology. While I understand the massive importance of looking at other groups (doctors, teachers, business professionals, lawyers), all of these groups are represented in universities in most places. Understandably, these students would be looking at different information, but there has to be relevant crossover. It would also seem that students occupy an important subset of users of IR systems as well.

    ReplyDelete
  15. 1 - In the beginning of the review, Saracevic mentions that information doesn't behave--people do. I'm curious if there's a way to design products to train research behavior, a la Google sort of setting up standards and expectations for how individuals seek and locate relevant information. Where does teaching the user intersect with IR and IA?

    2 - Given some of the criteria set forth on page 2130, especially in regards to information being "fun" or "frustrating," are users skewing the way we search for information? What I mean is, does the tendency to, say, search Wikipedia for a quick and dirty understanding of a subject, or the ongoing uptick in blogging and other forms of non-academic discourse, have any effect on academic studies of information? Do we tend to serve user bases that are so specific, e.g. particular specialists or tasks, that some of the larger contexts only affect us in smaller ways? I've seen lots of projects from the iSchool about info-seeking behavior as it relates to sites such as facebook, twitter, or online health forums, but this is totally out of my area of study, so I'd love to hear from people who have a more nuanced understanding of these fields.

    3 - I'm intrigued by some of the assertions on page 2140 - primarily that the proprietary nature of search engines and relevance studies, and the ongoing myth that the internet is a "democratic space," have constructed a misleading picture for users of what relevant information is (and, to get incredibly broad, of privacy/knowledge production in general). Is relevance, much like information, becoming monetized? Is this something we need to do targeted outreach to user groups about? I think there's a sort of societal complacency in how ubiquitous searching the internet for answers has become, but I don't know what conclusions can be drawn about how "monetized" relevance may impact our future.

    ReplyDelete
  16. 1. This article makes me think of how far we may, or may not yet, have come in terms of IR. Considering that it was written in 1997, do you think that information professionals, particularly those designing IR systems, still struggle to understand and incorporate the user?

    2. The field, in a way, seems to have flipped its approach to education from the way Saracevic describes it on page 22. The focal point now seems to be information science with library courses appended, as opposed to his suggestion that it's library science focused with some information science classes tacked on. Would you agree? It's just interesting to see how the field has changed, or at least people's perception of it, within the last fifteen years.

    3. There is a paragraph on page 24 where Saracevic is talking about funding for research and mentions (in a seemingly heated manner) "research on intelligent systems and agents". What does he mean by intelligent agents and why is he so angry (or seemingly so)?

    ReplyDelete
  17. 1. One of the relevance clues mentioned in this article is how users weight the different search criteria. Is there any information on how to determine which criteria a user weights over others? When performing a search, do the default options for some search criteria get weighted? If so, how are they weighted and how is their importance determined?
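    (As an aside on the weighting question, here is a minimal sketch of how a system might combine weighted relevance criteria into a single score. The criteria names and weights are hypothetical, purely for illustration, and not from the article.)

```python
# Hypothetical criteria weights; a real system would tune these empirically.
WEIGHTS = {"topicality": 0.5, "recency": 0.2, "authority": 0.2, "accessibility": 0.1}

def weighted_relevance(scores, weights=WEIGHTS):
    """Combine per-criterion scores (each in [0, 1]) into one weighted score."""
    return sum(w * scores.get(criterion, 0.0) for criterion, w in weights.items())

# Hypothetical per-criterion scores for one document.
doc_scores = {"topicality": 0.9, "recency": 0.4, "authority": 0.7, "accessibility": 1.0}
print(round(weighted_relevance(doc_scores), 2))  # 0.77
```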

    2. As time progresses on IR tasks, is it always the case that discriminatory power for relevance increases? It seems to me that there could be situations where a user becomes less discriminating as time progresses and they are presented with more information. This may then lead back to more discriminatory criteria - almost like a cycle of IR.

    3. Binary relevance inferences are interesting to me. I think, based on personal experience, that it depends. In some cases I recognize a binary relevance behavior and in others I recognize a gradient that hopefully over time leads to the information I'm looking for. Is it possible to organize information into categories where a binary or non-binary relevance judgment is more common and present results based on that behavior?

    ReplyDelete
  18. 1. There are two approaches to judging whether information and information objects are relevant. The first approach deals with topical relevance, while the second includes cognitive, situational, and affective relevance. I wonder whether the second approach includes topical relevance. If so, the second approach is much better and more comprehensive than the first. Why do we still use the first approach and consider only topical relevance in some situations?

    2. I’m very curious about the conclusions from the study of image relevance. Since there are different stories behind every picture, I wonder how the researchers could identify seventeen criteria. Even if there are seventeen criteria, I think they could not work every time. Sometimes we assume images with similar colors are relevant to each other, yet images with similar colors may have no connection at all.

    3. The development of IR is highly related to the development of IT. As the author says, larger databases and more flexible interfaces can impact IR algorithms. However, I doubt whether developments in IT can change relevance behavior and effects. In my opinion, the development of IT will not change irrelevant information into relevant information.

    ReplyDelete
  19. 1. Saracevic discusses "relevance judges." Who are these judges? Do they make a living judging relevance and what exactly are their qualifications?

    2. When summarizing consistency, the author warns, "Never, ever use more than a single judge per query." Why? The text leads us to believe that it is because of disagreement, which would simply confirm inconsistency. Does it go beyond that?
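    (On the consistency point: agreement between two judges is often quantified with a chance-corrected statistic such as Cohen's kappa. Below is a minimal sketch for binary relevance judgments; it is my own illustration with hypothetical data, not from the paper.)

```python
def cohens_kappa(judge_a, judge_b):
    """Cohen's kappa for two parallel lists of binary relevance judgments."""
    n = len(judge_a)
    observed = sum(a == b for a, b in zip(judge_a, judge_b)) / n
    p_a, p_b = sum(judge_a) / n, sum(judge_b) / n  # proportions judged relevant
    expected = p_a * p_b + (1 - p_a) * (1 - p_b)   # agreement expected by chance
    return (observed - expected) / (1 - expected)

# Hypothetical judgments over ten documents (1 = relevant, 0 = not relevant).
a = [1, 1, 1, 0, 0, 1, 0, 1, 0, 0]
b = [1, 0, 1, 0, 1, 1, 0, 1, 0, 0]
print(round(cohens_kappa(a, b), 2))  # 0.6 (moderate agreement beyond chance)
```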

    3. Apparently "the order in which documents are presented to users" influences their opinions of relevance. Isn't it possible that, in the experiments listed, relevance judgments were affected by order? Also, why is it that when the number of documents presented is small, order does not have any influence?

    ReplyDelete
  20. 1. In the different observational studies relating to information relevance, the number of users interviewed ranged from 10 to 300, and the topics and objects included maps, documents, articles, and so on. I am wondering whether these various studies are comparable enough to draw a conclusion about 'relevance clues'?

    2. At the end of page 3, Smithson is reported to have observed differences in judgments at different stages: Initial, Final, and Citing. However, the author does not offer a detailed explanation of these stages. How did Smithson define these three stages? What actions or processes were involved in each stage?

    3. When discussing IR evaluations, the author lists five assumptions about relevance judgments: that they are topical, binary, independent, stable, and consistent. My question is how these aspects can be quantified for computer searching during the information retrieval process. Is there a ranking of the influence of these judgments?

    ReplyDelete
  21. 1. Are search results that are presented earlier, as Saracevic claims, actually more relevant to our searches? How can they be when we live in a world where most search engines have commercial aims and algorithms that basically favor the results that are already on top?

    2. How would studying convenience samples of students really alter your findings on IR?

    3. What did Saracevic mean when he said that information doesn’t behave, people do?

    ReplyDelete
  22. 1. On page 2127, Saracevic admits immediately that 'relevance does not behave. People behave'. I find this statement an indication that the author is taking into account the 'humanity' of the subjects of his study (since a lot of what we have read does not always sound like it does, to me). With that in mind -- how often have studies been framed to better understand relevancy for a person by studying people?

    2. How much of an impact does a search engine's or database's UI have on how well a person searches with it? For instance, I've had patrons who will swear up and down by a certain database as being the best at what it does, while loathing one that I myself prefer to use. When pressed to describe why they feel that way, users have generally told me the other database 'confuses' them... even if the searches I do for them in that database retrieve information highly relevant to their assignments. Does the appearance and structure of a site preemptively influence a user's attitude, and thus their opinion of the relevancy of their searches with it?

    3. If more studies had access to a larger pool of varied individuals (since a lot of these were strictly students), would it be possible to estimate the relevancy of items in a search to an individual if we'd somehow broken that person down into an archetype? Have private businesses tried that, hence the types of ads one might see pop up?

    ReplyDelete
  23. 1. At the beginning of this article, the author refers to two approaches used to determine what makes information or information objects relevant. The first, the topical approach, deals with topical relevance only; the second, the clues approach, deals not only with topical relevance but with other kinds of relevance as well. So why do we still keep the topical approach?

    2. Regarding relevance dynamics, the author gives us many examples that seem to reach different conclusions. The studies by Bateman and the studies by Vakkari and Hakala indicated that the distribution of criteria changed only slightly; however, the report by Tang and Solomon found dynamic changes. So what conclusion can we draw from these examples?

    3. We talked about ‘recall’ and ‘precision’ in an earlier class. In this article, the author dismisses the binary premise of relevance on the basis of everyday experience. If relevance is not binary any more, how should we revise the calculation of ‘recall’ and ‘precision’?
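    (One standard answer, for what it's worth: replace binary precision with a graded measure such as normalized discounted cumulative gain (nDCG), which credits results by relevance grade and rank position. A minimal sketch with hypothetical grades; this is not from the article.)

```python
import math

def dcg(grades):
    """Discounted cumulative gain over a ranked list of relevance grades."""
    return sum(g / math.log2(rank + 2) for rank, g in enumerate(grades))

def ndcg(grades):
    """DCG normalized by the ideal (descending-grade) ordering."""
    ideal = dcg(sorted(grades, reverse=True))
    return dcg(grades) / ideal if ideal > 0 else 0.0

# Hypothetical graded judgments (0 = not relevant .. 3 = highly relevant),
# in the order a system returned the documents.
print(round(ndcg([3, 0, 2, 1, 0]), 2))  # 0.93
```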

    ReplyDelete
  24. 1. Of the various kinds of relevance discussed - topical, cognitive, situational, affective - which one most accurately answers the question of what makes information or information objects relevant? And why?

    2. Dynamic changes in relevance are studied while assuming two things. Do these assumptions lead to an incorrect inference? How can we avoid that? What are these assumptions based on? While studying relevance dynamics, are these necessary to be taken into account?

    3. The response to the question of whether topicality is the most important attribute in relevance inference by people says that topicality is not at all an exclusive criterion or attribute in relevance inferences; several other relevance attributes are said to play a role, used in conjunction or interaction with topicality. If other attributes are linked to topicality in one way or another, does that imply that topicality is the center of relevance inference and all other attributes are derived from or linked to it?

    ReplyDelete
  25. 1. I am curious as to how the research findings presented in this literature review vary when looking at specific demographics and populations. The author notes that most studies were conducted with students. With that in mind, and within the context of information science, I wonder if and how relevance behaviors change during one's life and development?
    2. What kinds of cultural assumptions are being made within the process of evaluating relevance? Is there a framework or structure in which this entire process is grounded, and in what ways does this influence how people engage with it?
    3. Since funding appears to be an issue for studies of relevancy, how have those invested in this field of study championed their cause? And furthermore, in what ways have the findings of these studies influenced the field of information?

    ReplyDelete
  26. 1. In this article the author discusses the idea of relevance feedback. He states that there are two types of relevance feedback (RF): manual RF, which comes from user responses, and automatic RF, which is accomplished by an algorithm. What are the pros and cons of manual and automatic RF, and in what situations would you use one type instead of the other? (See the sketch after this comment for what automatic RF can look like.)
    2. The author states that as relevance judges' level of expertise in a subject increases, so does their level of agreement about the relevance of a document. However, it is possible that they agree on the relevance of a document merely because of a shared bias from working in the same field. How would one go about compensating for this type of bias in an evaluation? And is it important enough to eliminate by using a varied pool of relevance judges, in exchange for a lower level of agreement on the relevance of documents?
    3. The author states, in the reflections section, that many of these studies were similar to, if not based on, the behaviorism model of psychology. However, he goes on to state that the behaviorism model went out of fashion in the scientific community because it made several assumptions about the simplicity of human nature. Should the field of IR evaluation research attempt to distance itself from this model because of its outdated ideas? Or does the fact that much of the research done under this model has been successful mean that IR evaluation research should continue to use it?
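    (For the manual-versus-automatic RF question in item 1 above: the classic automatic approach is Rocchio's algorithm, which nudges the query vector toward the centroid of documents marked relevant and away from the non-relevant centroid. A minimal sketch under the usual vector-space assumptions; the alpha/beta/gamma values are conventional defaults and the vectors are hypothetical, not from the article.)

```python
import numpy as np

def rocchio(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    """Rocchio relevance feedback: shift the query vector toward the centroid
    of relevant documents and away from the non-relevant centroid."""
    q = alpha * query
    if len(relevant):
        q = q + beta * np.mean(relevant, axis=0)
    if len(nonrelevant):
        q = q - gamma * np.mean(nonrelevant, axis=0)
    return np.clip(q, 0.0, None)  # negative term weights are usually dropped

# Hypothetical term-weight vectors over a four-term vocabulary.
query = np.array([1.0, 0.0, 0.5, 0.0])
relevant = np.array([[0.9, 0.1, 0.8, 0.0], [0.7, 0.0, 0.9, 0.1]])
nonrelevant = np.array([[0.1, 0.9, 0.0, 0.8]])
print(rocchio(query, relevant, nonrelevant))  # weights on terms 1 and 3 grow
```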

    ReplyDelete
  27. 1. After reading through the experiments conducted to research "relevance", I found them quite similar to card sorting when designing the global navigation of a website. We give users about 20 items and five categories, and let them classify the 20 items into the five categories. You will find that no two results are exactly the same, but some results share some degree of similarity. In card sorting, we need to mediate to find a harmonious result; is it the same for research on "relevance"? What do scholars do after they identify the "relevance"?
    2. In Relevance Dynamics, the author mentions that “As user progresses through various stages of a task, the user’s cognitive state changes and the task changes as well. Thus, something about relevance also is changing.” I cannot agree with the causality pointed out here. We cannot conclude that relevance changes simply because the cognitive state and the task change; the author neglects other factors that may influence changes in relevance, like culture, technology, etc.
    3. Another question popped up when reading the Relevance Feedback section: when we present information on a website, should we prefer more relevance feedback or less? (a deeper-or-broader question)

    ReplyDelete
  28. 1. The author states that “relevance behavior studies are closely related to information seeking studies and to the broad area of human information behavior studies.” I’m interested in knowing what disciplines contributed to these different areas of interest and how that influence affects relevance in searches. Is there enough communication among these interdisciplinary areas, and how are iSchools a possible answer to this problem?
    2. Reading the articles this week made me think about the goal of libraries and information literacy. Traditionally, libraries try to teach patrons how to search in different ways so that they can find the relevant information they need. Will this always be necessary if we perfect our information retrieval systems, or will language always be evolving so that we will need some intermediary to teach us how to search?
    3. Looking at the interdisciplinary nature of information retrieval, does any one discipline have an advantage over the others? Depending on how the search is conducted, any number of experts could be consulted. I think of different socio-economic groups and how the construction of a question would impact the answer. It seems a system would need to understand the lexicon of different generations and groups, along with the ability to search and ultimately interpret images, to give people the results they ultimately need. I wonder what fields have yet to be included that could make a difference here? Are information retrieval researchers ignoring disciplines they shouldn't?

    ReplyDelete
  29. 1. It's interesting to consider the human elements that search engineers must take into account. In talking about relevance, the author reminds us that "relevance doesn't behave; people behave"--something to keep in mind the next time I get mad at Google for not returning the results I need.

    2. What kind of privacy implications are there in manual relevance feedback measures? I'm assuming that, for manual RF to work, a web search engine would need to keep track of the series of search terms used in a single search session. While obviously this kind of information would be useful in refining a search engine for later use, does this tracking of search terms influence human behavior - for instance, causing users to avoid particular searches, etc.?

    3. The research cited in this article is concerned with the most important function of a search engine: returning results the user deems relevant. However, is it/will it ever be possible for a search engine to allow for serendipity? I imagine this would look something like a search engine returning a wide swath of documents in response to a short or vague search term, allowing the user to browse information that is interesting, rather than responding to a particular query. Of course, in most instances this would be unwanted; however, might there ever be a Google with a serendipity switch that users can select when wanting to browse?

    ReplyDelete
    Replies
    1. RE: 2. The tracking feature in Google's search engine already deters me from using it for some research in normal mode. When I wanted to look at the new interface for MySpace, I decided to use incognito mode (CTRL+Shift+N) so that it does not apply the search history to me. In that case, I did not want Justin Timberlake ads appearing in paid search positions in my other searches or page visits. (This is called retargeting in online marketing terms.)

      Delete
  30. 1. Was the bulk of conclusions from prior research around the time of "Part I" based on "examples, anecdotes, authorities, and contemplation" rather than on data?

    2. Would the generalizations of the relevance clues (p.2130) still hold if the search was for audio or image rather than text?

    ReplyDelete