1. At the beginning of this article, Andersen and Mandich offer us a summary that reads "Organizations must get the right information to the right person at the time required to make the right decision" (p. 1). But if the idea or information is still around, but hasn't been received by the right person at the right time, does it then become superfluous? The article makes it sound as though, if you don't get the timing right, your idea or knowledge is basically useless. Is there another time in the Information Life Cycle when ideas/knowledge could be introduced?
2. Having read quite a few articles on the theoretical DIK hierarchy, it is interesting to see that in this article, information is created from knowledge. Someone has tacit knowledge which is then (if the timing is right) transformed into an expert system or explicit knowledge. Andersen and Mandich's article focuses on information management, but how does that differ from knowledge management? Is there an argument of information vs. knowledge management?
3. On page two of the article, the authors briefly mention three different steps of the information life cycle - to formalize, rationalize, and discard - that are important to information management. I feel that they addressed the issue of formalizing and rationalizing information, but didn't place the same emphasis on the last step. Is it that people don't like to talk about discarding information? Is it somewhat of a taboo subject in information management, like the place where information and ideas go to die?
3. The author says that "Yesterday's discarded idea might be the IP/IC that saves the company tomorrow!" I don't think that by "discarding" information he means destroying it or throwing it away, but rather pushing it into less critical systems once it is no longer relevant.
1. Moving from disconnected bits to connected bits, what about skills or know-how? The skill of performing brain surgery is gained through years of experience and training, and it differs from the information documented on a website or in a book. So information changes into a different sort of information as it moves up through the stages of the life cycle, right?
2. I find this article very useful for understanding knowledge management in a large corporation. I work for JCPenney, a very large company with thousands of employees in various departments. As a user experience designer, I have to take different stakeholders into account when conducting a project. But sometimes I find it very hard to find the right contact person to ask about his needs for a certain problem. That person definitely has an idea of what he wants, i.e., the tacit knowledge. But since we cannot meet and talk, his tacit knowledge and my tacit knowledge cannot evolve into explicit knowledge. Thus we cannot find the proper solution.
1 - I'm curious as to how information professionals can disrupt, or improve the flow of information within a company. Several UT graduates have gone on to corporate librarianship positions that were recently created within a company, so how does a new employee start from the ground up when it comes to consolidating expert knowledge, or becoming a trusted resource for information? For example, is it more efficient to just say "John in IT has the answer to that" or does the IR professional tend to provide more comprehensive answers? Are there any people who have worked in corporate information/knowledge management settings that could speak to this?
2 - How can organizations better recognize who has useful or valuable knowledge to share? When I was working at Levi-Strauss, they were in the process of trying to create a "Yes, but how?" culture - that is, a culture focused on saying yes to new ideas to improve efficiency. However, I found that lower-level employees were often not taken seriously, even when their positions were full of inefficient processes that could have been streamlined, but if the same critique came from say, their manager, people instantly paid attention and worked to fix the problem. How can we determine "expertise" in a business environment so that title does not become the only source of a person's value?
In regards to your second question, I think there is a difference here between ideas and knowledge. If your company wants ideas for new concepts, then ideas could come from anywhere. However, some information or knowledge I would only trust if I knew the title of the person or the department it came from, because of their specialization in that subject matter. I wonder how, in knowledge management systems, the authority of the writer is captured or transmitted with the information.
In my experience though, companies seem to value the knowledge of someone who has been at the company longer than someone new, even though the new person may have had more experience at other companies or has better training. This plays into your first question that I think it is difficult for a new information professional in a company to gain the trust of others to become a standard resource for those in the company looking for information. Whether it is starting a library/archive/knowledge management system in a company, I think it just takes time for others to see the value, and this can only be built by providing good service.
1. Andersen and Mandich assume that the impact on the organization is controlled externally by the information life cycle, which has stages, where information is created, delivered and managed. Can we add one more stage to this life cycle, where information is curated? Will this make any changes in the assumption?
2. The author says that when two humans are communicating, there is an infinite exchange of metadata to find, communicate, contextualize, and consume the information. In order to mimic this in IP/IC systems, a layering process is suggested. What if the metadata processing is inefficient in these layers and leads to more imbalance between the creation and management processes?
3. While assigning value to a source, an important factor that must be considered is the proximity of the source. There arises the concept of TTL (Time To Live), which must be assigned and changed with each improvement of the document. This concept can give us information about the accessibility of the source, but what does it convey about the location of the source?
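The TTL idea in this question can be sketched concretely. Below is a minimal, purely illustrative Python record (the class and field names are my own invention, not from the article) that attaches a TTL to a document and resets the clock whenever the document is updated:

```python
from dataclasses import dataclass, field
import time

@dataclass
class DocumentRecord:
    """A hypothetical record carrying the TTL metadata the article describes."""
    title: str
    ttl_seconds: int                      # how long the content stays relevant
    updated_at: float = field(default_factory=time.time)

    def refresh(self, new_ttl_seconds=None):
        """On each update, reset the clock and optionally change the TTL."""
        self.updated_at = time.time()
        if new_ttl_seconds is not None:
            self.ttl_seconds = new_ttl_seconds

    def is_live(self):
        """True while the document is still within its relevance window."""
        return (time.time() - self.updated_at) < self.ttl_seconds

doc = DocumentRecord("Surgical technique memo", ttl_seconds=3600)
doc.refresh(new_ttl_seconds=7200)   # document revised; relevance window extended
```

Note that a TTL like this only tells a consumer *whether* the document should still be trusted; as the question points out, it says nothing about *where* the source sits relative to the consumer.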
1. In Mandich’s introduction, he discusses the difference between “tacit” and “explicit/connected” information, giving an example of tacit information (and an expert system) by saying that it “might be locked in the head of senior employees who have been completing a specific task for many years”. Mandich seems to dislike this expert information system in favor of connected information, as he talks about companies who have “evolved past the expert system” and claims that “organizations with excessively formal IP-creation processes [such as expert systems] often tend to have a longer gestation period for their IP”. However, is the expert system necessarily a bad/inefficient thing? I can see the proximity issues it might create, but if you’re talking to one person who for sure knows the information you need, isn’t this better than having to search for it?
2. Mandich also discusses the “cost of creation” of information in his article. He describes (and gives diagrams of) two different models of the information life cycle that organizations can use – one with and one without “buffers”. One limitation he gives for the buffer model is that “an idea and the implementation concept behind the idea can often be fleeting. Complexity or blockers applied early in the information life cycle can force good ideas to escape without ever being realized, or not realized in time”. While this is certainly true, wouldn’t the blockers also keep bad ideas from being realized too quickly? Is it better to have a good idea realized “in time” or to keep a bad idea (which could potentially damage an organization) from being enacted?
3. In the “Complexity versus Management” section of his article, Mandich describes what happens to information that “loses relevance” and drops into the lower tiers of the system, becoming “unmanaged”. While unmanaged information may certainly age, Mandich asks the question “does it ever die?” I would like to attempt to answer this question by arguing that it doesn’t. When I first read this, I immediately thought of the Yahoo message boards that date back 10 years but are still “up and running”. Also, consider simple facts such as how we used to believe the world was flat. Obviously, the world isn’t flat. That information is no longer relevant, but it isn’t dead either. However, does that just fall into the “archived” information category that isn’t “used” but is still important? It could be argued that “dead” information is forgotten, but does that truly mean it died? I feel that forgotten information could still be in the realm of information that hasn’t been learned yet. It’s there – we just don’t know it.
3 - While I would agree with you that information doesn't technically die, there's also a bit of a spectrum at play in terms of information's relevance or validity/truthfulness. I think the difficulty in answering the "does information ever die?" question is that it eventually becomes rhetorical, much like the tree falling in the forest. We could ostensibly argue that all information is already there and we just "don't know it" yet - cures for diseases, mapping specific genes, plant species, all of this information is already available but we haven't necessarily discovered it. I think the important point Mandich could be making is that information that drops out of the sphere (extinct languages, "the world is flat" "imbalanced humors cause depression") has been deemed by society to be no longer relevant or useful, so the information itself is 'dead' in terms of relevance, but our awareness that it at one time existed and was accepted as factual information isn't. If I eat an apple, the apple ceases to be physically present, but my memory that yes, I ate an apple yesterday, remains.
From a practical standpoint I think information that loses relevance for the here and now becomes contextual information that permits us to build on newer information, hence the "does it ever die?" question of Mandich's. Following the "world is flat" example - we've already tested and disproven this theory, so 1) we won't test it again and 2) we can use the fact that the Earth is spherical to help us understand and investigate additional planets - we won't be expecting to find a rectangular planet in our solar system. It's not so much that the "Earth is flat" is specifically relevant information, but memory of the hypothesis' failings are, if that distinction makes sense.
1. I also questioned Mandich's dismissal of tacit information systems. His description of an expert information system reminded me of employment trends spawned by industrialization; jobs took less and less general knowledge and more very specific expertise. More cogs were needed to run the machine. So maybe Mandich is picturing an early 20th century system of expert information? Oy vay. I think the more company knowledge we can make explicit through wikis, etc. the better but that we will always turn to specific individuals for their expertise in certain areas that can only be learned through experience.
1) The article alludes to but ultimately skirts the debate over whether information ever dies. I think this is an interesting question worth exploring. From a historiographic perspective I would argue that it is valuable to know what was known or believed at a given time, even if that information no longer has day-to-day applications. However, the article seems to come from a more corporate perspective, which may not find the same value in this information. Is there a right or wrong answer to the question of whether information can die?
2) This article seems to take a different view of what constitutes “tacit information” than previous readings have. For instance, it lists information on a computer’s local hard drive as “tacit information,” whereas past readings have characterized tacit information as unarticulated and taken-for-granted information internal to an individual. If the information has been articulated outside of the individual and is stored in a relatively easily-retrievable format, is it “tacit” or just disorganized?
3) The “heavy process” method of dealing with complex information seems to be criticized here, with the example of the slow FDA drug approval process possibly costing lives. What the authors don’t seem to consider, though, is the fact that the FDA approval process (while cumbersome) is designed to prevent insufficiently-tested medicines from causing harm to patients. How would the authors suggest mediating the needs for speed and precision when processing information when both of these considerations can be literally a matter of life and death?
1. Andersen refers to knowledge held by a single person within an organization as “tacit knowledge.” This knowledge must then be made explicit and formalized to create genuine intellectual capital. However, many individuals within an organization may be inclined to hoard their individual capital and avoid formalization to retain their value to their organization. How can fears of individual deprecation be alleviated to ease formalization?
2. Andersen cites the FDA as an example of an overly-long IP/IC creation process whose long approval process can cost lives. Meanwhile, the FDA frequently faces accusations of regulatory capture and even that long approval process is fraught with methodological arguments in drug studies. The FDA is an external IP creator meant as a gatekeeper for private actors with a profit motive in rapid approval. How can the FDA better combat methodological challenges to accurate creation?
3. Documentation of processes has proven to be an efficiency booster for many organizations, and often a necessary step as agencies expand and formats must update and change. At times, an external consulting firm may be brought in to assist in documentation processes. Does this smooth the IP creation process, or does the monetization of the process result in an elongated cycle?
1. Easing the fear of being made obsolete would be hard to do in an organization (and perhaps industry) that treated its employees like single-use printer cartridges - as if you are no longer valuable once the information is out of your head. I have a friend who once interned at a major software company in town, and he wrote software to do the work that he was hired to do. In one sense, he made himself obsolete. With that in mind, which organization has more hope of improving its management of information? An organization that would downgrade his value upon discovering that this employee automated his own job? Or an organization that would give this employee the chance to work on more interesting and tougher projects because he has proven himself to be valuable in other ways than the company originally recognized?
1) In this passage "Information that is no longer relevant can be quickly pushed out of the critical systems and into less critical systems, pending their eventual retirement... two sides of the same coin, then, are light process (publication in a journal), which quickly moves like a life-saving surgical technique to a larger population, and heavy process (FDA), which might drag on for years before allowing a life-saving drug to emerge." Can it be argued that information with long-term/far-reaching effects for a person warrants taking a longer time to verify and validate? A surgical technique's greatest impact on a person is during surgery, but a drug that person takes to combat a long-term condition like arthritis makes a great impact on a regular basis.
2) There is a lot of talk of the value of a source being the 'age of the information'. Is there a way to quantify how old information can lose value in one field but potentially gain equal or greater value in another? Has there ever been a focus on how information's importance does not necessarily erode but instead shifts? Where would archives fall in the information lifecycle depicted?
3) The very beginning states "the impact of an organization is controlled externally of the information life cycle -- and, for that matter, of the organization itself -- and cannot be controlled by the organization." How would organizations such as subscription databases, whose purpose and business is the controlled access to information, fall into this mode of thinking? Do they not by their very nature impact the information life cycle and control it?
To your question #2, I asked a similar question (below), and I've been wondering about this throughout the class, as we discuss "useless data," "old information," and even "incorrect information." All data has potential value, even incorrect data (it might, for instance, help historians understand category mistakes, technological glitches, or dogmatic assumptions made by researchers of prior generations, which could be historically important). I agree that pinpointing "where archives fall" within this paradigm is a vital question, and I hope we explore it more in class. This sounds like a great topic for a (perhaps rather theoretical) paper.
1. The author defines two types of created information: tacit and explicit. For tacit information, the author gives three examples: human memory, the local hard drive of a computer, and the expert system. From my understanding, the author classifies information that is not widespread as tacit. Does such information run a high risk of disappearing before it reaches the next step of the information life cycle? Or does this mean tacit information is more vulnerable than explicit information?
2. I used to work at a large firm in China, and I encountered many things that clog the information flow within a company. For example, we created process documents to help new co-workers get on the right track as soon as possible, but the paperwork overload undermined work efficiency. We could have focused on thinking about and dealing with more important work instead of simultaneously recording what we had done. On this point, how should we weigh and choose between work that improves efficiency and work that streamlines information flow?
3. To illustrate the dissipation of information, the author compares it to throwing a baseball, suggesting that the distance between the information a user requires and where it resides determines that information's proximity and timeliness. That may hold in a simple model in which only one receiver and one thrower are involved. However, could this example fail to apply in a net-like model in which each point on the net can be both a receiver and a thrower? The case that pushed me to this question is that even a small rumor can result in a storm on Twitter.
“Organizations must get the right information to the right person at the right time to make the right decision.” I would replace the word “right” with the word “best”. Sometimes no information is right, or there is no information at all. Sometimes designing information delivery to the “right” person is not as good as getting information to the “best” person or people. And are there always right and wrong decisions?
“If you doubled the distance, the pitcher would not be as effective and would have to leverage a completely different type of pitch.” - It seems like archivists are pitchers who get called in when the distance from the mound to home plate changes - which seems to be all the time.
1. At the very beginning of the article, the author introduces the concept of “tacit information”. I agree that knowledge in an expert system is tacit, but I do not think information begins its life in a tacit form in all cases. For example, a count of a company’s inventory is new information that is not tacit. Since all the following discussion of the information life cycle starts with disconnected information, I guess the author means that knowledge, which begins its life in an expert’s head, is tacit and disconnected.
2. In Figure 4, the author attempts to discuss the cost of IP/IC management but does not clearly demonstrate the relationship between the cost of management and the two factors. If the goal of IP/IC management is moving information from the lowest to the highest levels, Explicit Knowledge seems to require less management than Tacit Knowledge. If so, the cost of managing Explicit Knowledge is less than that of Tacit Knowledge, right? If my understanding is correct, I wonder what role the “Required or mandated IP/IC process” plays. In the figure, the “Required or mandated IP/IC process” is the reason knowledge becomes explicit or tacit. Could the cause and effect be two independent factors?
3. In Figure 7, this article discusses management and complexity again and adds another factor: relevance. On the one hand, the author says the management requirements for information increase as the relevance of information increases; I think the author should add the premise of unchanged complexity, and the first sentence has the same problem. On the other hand, I can hardly understand the “low relevance” and “high relevance” curves. Why do the Value curves approach the Relevance curves asymptotically? Does it mean the relevance is increasing or declining? If so, the curve shows that when relevance increases, complexity increases, but that is not true. I have no idea how to fix this figure, but I do think it has some problems.
1. The authors write, “Many organizations rush information from its tacit state to a formalized state much too quickly. The reverse is also true with organizations that slow the IP/IC creation process.” However, they do not provide examples of this in organizations. What is an example of information moving from tacit to formalized? Is this a bad thing?
2. How do time restrictions or limitations on users or creators of information in an organization affect the IP/IC processes that Andersen and Mandich describe? They write of the costs, and time could be considered a cost, but they don't talk about how employee time constraints could affect the process of consuming information or validating a source.
1. An element that seems to be missing from creating an effective information management system is promotion and education. It takes time to add the metadata, key words, time of life, time of expiration, creator and refresher, etc. Unless employees see the value in adding this information, managing the information within the knowledge-management system will never truly be successful. As Andersen says in his biography, "They felt that the taxonomy and other requirements were too stiff and rigorous. Yet, on the other side, the same people complained that the search was ineffective." Essentially, they want the benefit without the responsibility of the work.
2. Andersen lists the impact of the information on the organization as a tier in the life cycle, yet doesn't go into detail when discussing the tiers. If this area were stressed more, perhaps it would influence change. For example, if it were a job requirement for all employees to document their ideas and idea development, then maybe they would spend the time and energy to develop a system that expedites this documentation. Perhaps some items could be worked into the system to be auto-generated through machine login and other means.
3. In our previous articles we have discussed a difference between information and knowledge. Here the authors use the terms interchangeably. Is there a difference in process and life cycle for information versus knowledge? Or can the same process be used for both? How would the process be different for knowledge - are there elements or steps left out?
1. Intellectual Capital and tacit knowledge are clearly closely related according to the theoretical framework of this white paper. Are these terms identical, synonymous, overlapping, or merely related? Are their definitions situational? How would we import these terms into our specific fields (libraries, archives, IT, etc.)? Thinking about how these terms are used herein, have they, in our experience, pointed to identical concepts? I’m thinking about actual work experience (regardless of field).
2. In the section titled “How Is Information Delivered?” the authors say: “Essentially, when two humans are communicating, there is a fluid and almost infinite exchange of metadata to find, communicate, contextualize, and consume the information. Automated IP/IC systems cannot do that today, as it would require a massive management system for the creation of the IP/IC.” This directly relates to the issues we’re currently exploring in my Metadata in Cultural Heritage Institutions class, where we’re discussing the role of subject specialists, catalogers, and crowd-sourced metadata entry. To what extent does RDA, based as it is on FRBR, point toward – lurch toward – a totality of cross-referencing? This Microsoft report asserts (above) that this cannot be done and would require a massive system, yet information professionals, primarily from the library field, are trying to tackle it (at least in theory).
3. The authors explicitly privilege “age of information” as a major factor in its usability. Is this always a critical factor? In what kinds of situations would it be critical versus non-important? They also discuss “time to live,” saying “Each document that is created should include a time that it is relevant. Then, upon updating documentation or information, that TTL can be changed.” While this seems to be an obviously relevant protocol for the average office or business system, how do these concepts relate to information professions specifically? I guess I found this reading very… “Office Space” – too dependent on purposely obtuse concepts (and acronyms!) and a bit inaccessible beyond the corporate lexicon.
1. In Mandich’s discussion of tacit vs. explicit or connected information, the author seems to favor a connected information system and speaks as if an expert information system relying on tacit information is somehow less evolved. In some sense, don’t most businesses rely on a combination of systems today? Don’t we rely on “experts” with a wealth of personal knowledge throughout our day-to-day lives?
2. I’m confused about Mandich’s definition of tacit information in the first place. How can human memory and the local hard drive of a computer both be tacit information? What point is Mandich trying to make by associating the two?
1. At the beginning of this article, the author introduces the IP capture process between disconnected bits and connected bits, which later become the original resource in the IP life cycle. However, Figure 1 also shows a direct transformation between the two kinds of bits. So my question is: how could the disconnected bits become connected without the IP capture process?
2. The author claims that the value of the source can have a huge impact on the information life cycle, and he summarizes three factors that create the value of any specific source: age, proximity, and source/previous interactions. I agree with his point of view, but I would like to know more about methods for judging source value. Could we say that the more recent, more convenient, and more familiar (from previous interactions) information is, the more useful it will be?
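One crude way to make this question concrete is to combine the article's three factors (age, proximity, previous interactions) into a single score. The formula, weights, and decay rates below are invented for the sketch; the article only names the factors, not how to combine them:

```python
def source_value(age_days, proximity_hops, prior_successes, prior_uses):
    """Illustrative scoring of a source using the article's three factors.

    age_days: how old the information is
    proximity_hops: how many steps away the source sits (0 = direct contact)
    prior_successes / prior_uses: track record from previous interactions
    All weights and decay rates here are hypothetical, for illustration only.
    """
    age_score = 1.0 / (1.0 + age_days / 365.0)        # newer scores higher
    proximity_score = 1.0 / (1.0 + proximity_hops)    # closer scores higher
    # No track record yet: fall back to a neutral 0.5 rather than zero trust.
    trust_score = prior_successes / prior_uses if prior_uses else 0.5
    return (age_score + proximity_score + trust_score) / 3.0
```

Under this toy model a recent, nearby, well-trusted source outscores an old, distant, untrusted one, but it also shows why a blanket "newer and closer is better" rule is suspect: any real system would need to tune or learn the weights per domain, since (as other posts here note) old information can regain value in a new context.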
3. In IP/IC management, there are three factors that have a great influence on the IP/IC: cost of management, cost of creation, and value of the source. But I am wondering whether it is possible to single out the most influential factor among them. For example, how much managers are willing to spend depends on how valuable they think the information resource is. So, is value the determining factor? I don't know....
1. Reading through this article I am very clearly reminded of what a lot of companies I worked for used for their own knowledge bases and the life cycle they had. Most often the wikis, or whatever the company was using, would get created to pool everyone's knowledge on a particular subject so that it could be archived and shared with others. At the beginning of this process everything would be great. What would inevitably happen is that the archived information wouldn't get updated with changes, would get stale, and would eventually become unusable. This happened at every company I've ever worked for.
2. The value of the source to the end user is something I'm very familiar with in my personal life and something that is plainly evident in large examples like Wikipedia and other crowd-sourced information sources. It seems like every time I have a computer problem I'm almost guaranteed to find search results that I immediately disqualify because of the source of that information. This is also the problem that Wikipedia has solved, or is trying to solve, with its model of having users source and edit all the information.
3. The idea that the way an organization shepherds information through its life cycle can negatively affect its impact is something I have firsthand experience with. Oftentimes a knowledge base was created from a direct need, and the knowledge it contained was not effectively disseminated to all those who might need it. Later on, other people were made aware of its existence because of a particular need for some information in the knowledge base. Depending on the time that has passed since the knowledge base's creation, and the very real possibility that the information within it was not updated, the information may have become stale or inaccurate. This then provides opportunity for mistakes to be made from the confusion of inaccurate information.
1. In this article, the author defines the information life cycle in another way: how the information is created, how the information is delivered, and how the information is managed. Are there any overlapping sections between this information life cycle and the framework of the information life cycle in the last article (creation, acquisition, cataloging/identification, storage, preservation, and access)?
2. According to the author's statement, relating information to a specific problem and then documenting it is one way to create information. Is that the only way to create information? Could it cover all the disconnected bits? Can the proximity of information measure whether the information has been delivered well?
3. The author talks a lot about cost in the how-information-is-managed phase; what costs (challenges or risks) can we find in the how-information-is-created and how-information-is-delivered phases?
1. The article uses the metaphor of the FDA to illustrate how a system can decrease simplicity and in turn increase IP/IC cost. I am curious, though, how the author sees this complexity as an oversight when, in the case of the FDA, it seems purposeful, preventing bad drugs from entering the market - weeding out bad IP/IC, so to speak. I am interested in how such a process could be improved through simplicity while still providing safety to the consumer.
2. At the end of the paper the author presents the question of whether or not information ever dies. I tend to believe that information (even outdated and useless) always has a value, even if its only value is to chart the human progress of a process. For example, it would seem to me to be very important to document information about old surgical techniques for future reference by surgeons who might be researching possibilities for new techniques. Figuring out where that information should live seems to be the hardest question.
3. I was very confused by the concept of information creation as outlined in this paper. Did anyone else find the diagrams in this article completely confusing? Also, the authors of the paper seemed to leave out all printed and visual materials under information deliverables. Is this perhaps because it’s Microsoft?
In this article, the author lists several ways information can be delivered: a one-to-many presentation, a white paper, a website FAQ, an informational Web site, and so on. However, the author doesn't address the efficiency of each. Which would be the most efficient?
Scott assumes the age of information is one of three factors that create the value of any specific source. The relationship between the other two factors and the value of information is easy to understand. But which has more value: new information or old?
In the first half, Scott says that an idea never really disappears. Later, however, he argues that complexity or blockers applied early in the information life cycle can force good ideas to escape without ever being realized. How can we resolve this apparent conflict?
1. I’m somewhat vague on his concept of information proximity. Perhaps we could discuss it in more depth.
2. So is he more or less arguing that organizations need to learn better, or more effective ways, to manage their information capital and resulting information property? Or is he simply emphasizing the importance of doing this? Or am I way off the mark in understanding this paper?
3. His definition of the information life cycle, the process of knowing when to formalize, rationalize and discard information, differs quite a bit from the other two readings. Is the concept he's putting forth specific to an organizational setting? It's founded largely in the idea of information management, but does it encompass other aspects of the cycle (such as creation, storage, etc.) in more simplified and condensed terms?
1. How does explicit knowledge become disconnected bits? When we learn that a piece of knowledge is outdated, is that itself a new piece of useful information that we did not have before?
2. How should we value the age of information? Is the latest the most dependable? What if experience counts as a kind of information? Is it then the older, the better?
3. How does this article's distinction between information and knowledge differ from the one we discussed in class?
Andersen and Mandich state quite clearly that knowledge can be input into an expert system. The example using John illustrates how one might document knowledge of a particular subject. However, would the tips, or what the article calls “knowledge,” simply be information in the hands of anyone other than John, as the general DIKW hierarchy we read about might suggest?
One of the problems the article points out a user might be faced with in relation to information proximity is that of the inability to use information in its current format. This seems like a problem that occurs when trying to deliver the information in an easily digestible manner. With the tools available to individuals and organizations, which include the ability to present information in different mediums and formats, is this problem less likely to appear moving forward?
In the Cost of Creation section Andersen and Mandich say, “Many organizations rush information from its tacit state to a formalized state much too quickly.” The rest of the section suggests that this is a conscious decision, a way to avoid losing out on what appears to be a valid IC/IP track, with management used to make the process as efficient as possible. How then do organizations deal with the cost of IP/IC as they attempt to manage the process entirely? It seems like finding an equilibrium takes just as much work as the IC/IP cycle itself and must adapt with each new idea in the pipeline.
1. In the introduction it says that “in all cases, information begins its life in a disconnected, tacit form.” So is it disconnected or tacit? From the way Andersen and Mandich worded things it sounds like tacit information is more of an unknown, secret type of information privy only to a certain person(s) whereas disconnected information sounds like a more practical way of describing information a person has not yet processed and does not know yet, but will.
2. The author says that there are four tiers of the life cycle of information: relevance, timeliness, ease of reuse, and the impact of the information on the organization. Would you add any more tiers to the life cycle of information, and what would they be?
3. The section “Value of the Source to the End User” highlighted a question I’ve always had. The authors ask, “how do you know what another person knows?” Or rather, how do you know your source is reputable, and why should you trust that source if all it does is use other sources to make its point? Why is any source more reputable than any other?
1. The author gives us the definition of the information life cycle in this article: a process that moves information from disconnected to connected, from informal to formal. I think the aim of this process is to make information more useful and effective. Why, then, does the author say that ‘managing information through this process becomes an exercise in knowing when to formalize, rationalize, and eventually discard information’? That is, why should we eventually discard information?
2. The author mentions an example of an expert system: ‘John knows how to fix the copier on the third floor; ask him. In a broader case, John works for Company X that makes Copier Y. If you call their help desk and ask for John, he can fix the copier.’ This example shows us how to formalize tacit information. But to some degree it also shows a limitation of formalizing information: it narrows the information’s scope of application. If John no longer works at Company X, the information in the broader case becomes invalid, while the first version remains usable. So why should we formalize information?
3. The author states that the ‘source of the information, and previous interactions with that specific source’ is a factor creating the value of any specific source. There is an interesting phenomenon in daily life: when we need advice about an illness, we tend to see and trust a doctor rather than someone else, because we value the source of the information, which in this case is the doctor. This used to happen in hospitals, but now it happens on the internet, where many people claim to be doctors and we tend to trust them. Have you ever considered that some of them may not be real doctors? So I doubt whether this factor can correctly establish the value of a source.
1. In this paper the authors put forth an interesting model of the life cycle of information. This life cycle seems in some ways similar to, or at least attempts to accomplish the same goals as, the DIKW pyramid that we discussed in previous articles. Is this model a substitute for the DIKW definition of information, and if so, how does it stand up against that model? 2. In this article the authors state that there are three methods by which information is created: systematic, environmental, and trial-and-error (or ad hoc). Are these the only methods by which information can be created, or are there others? 3. When discussing the cost of lost time for an organization in the creation of information, the author mentions two processes for creating information, the light and the heavy. He uses a medical example to explain them: the light process is like a medical journal that allows a surgical technique to be disseminated quickly, versus the heavy process, like the FDA, that takes a long time to approve a drug, potentially causing deaths due to inaction. However, is it not the case that a heavy process can often lead to greater rewards because it can guarantee the usefulness of the information created? To use the author’s example, isn’t the FDA a better method because it can usually ensure that the drug is not too dangerous?
1. In the Introduction, Andersen and Mandich assume that "the impact on the organization is controlled externally of the information life cycle....and cannot be controlled by the organization." What is meant by this comment? Wouldn't the rules and regulations of the organization, and the way it manages its information, control the impact?
2. The authors, in their discussion of the Cost of Creation, mention adding additional buffers, which adds to the complexity of the life cycle. Why would buffers be used, especially if they could cause ideas to be lost? What would those buffers be? Some examples would help.
3. Andersen and Mandich talk about metadata in the section Value of the Source to the End User, complaining that it is cost-prohibitive but also that it increases proximity. As Evans and others have suggested, wouldn't the cost be reduced if metadata were included at the time of creation? Or could the cost be passed on to the end user?
1. What makes reuse so important in the IP creation process? I feel like there's a whole lot of information that exists in digital form whose use could be, or is, limited, especially given how expansive digital mediums are. Are there any consequences to underused IP systems? 2. In figure 4, the author demonstrates information management within an organization. In what kind of organization is this relevant? Is this kind of framework and model best suited for institutions that work mainly with information? What about institutions that work primarily with people? 3. I find the idea of TTL and giving information a lifespan interesting. What is the benefit of controlling the age of information? And further, thinking back to the DIKW framework, how does the age of information relate to ideas of wisdom?
1. When Andersen introduced the idea of intellectual capital, it reminded me of the concept of information silos, where information within an organization is not communicated between departments efficiently. This paper discusses how crucial it is for information to flow efficiently in organizations in order to get the information to the user at the point of need. Even within organizations that have “evolved” past the expert system and document the information they create, there still seem to be instances where the information is not easily located. How can organizations manage the creation of information and the appropriate use of metadata to make this system function smoothly? Is there a need for organizers of information within organizations, or is this something that will remain uncentralized and unstandardized, as with most organizations?
2. I was very interested in the preservation of tacit information. Within many of the organizations I have worked for I found that there are experts who know the local environment and historical information about the organization that have shaped the way the organization functions. Is it ever possible for organizations to document this information or preserve it? When I think of documenting this information, I wonder if the information can ever be removed and still be meaningful?
3. While the question “Does information die?” was out of the scope of this paper, it discussed the information life cycle, including destruction. It again reminded me of the expert within an organization who has a great deal of tacit information and then departs the organization. It seems that this information may not die but stay in a limbo, to be re-discovered as needed. If we think of information as objects, the information should be retained somewhere in the environment, waiting to be discovered and interpreted by someone new. I guess the only time it would die in this scenario would be when the environment changes significantly enough that it can no longer be interpreted in the context intended.
1. I was a little confused about the concept of "tacit" information. Conceptually, I understand the difference between tacit and explicit information. However, the author lists both information stored in one's head and information stored on one's hard drive as tacit information. To me, these two are fundamentally different--in ways they are stored, accessed, implemented, etc. The one similarity I see is that they are both inaccessible to other users in a system or network. Is this similarity enough to link them together?
2. For small organizations, I understand how tacit information or tacit knowledge would be useful. For large institutions, though, wouldn't it make more economic sense to centralize this type of information and make it accessible to users at other points on the network/system?
3. What happens to information that is dumped from a system after it's considered no longer applicable to the solution required? In the more philosophic channel of our first few class meetings, does it cease to be information because it no longer has value?
1. In the introduction, the authors bring up the concept of impact: “An assumption of this paper is that the impact on the organization is controlled externally of the information life cycle—and, for that matter, of the organization itself—and cannot be controlled by the organization. As such, we assume that this component of the life cycle changes slowly over time, giving us plenty of warning when processes and procedures must be altered to adapt to the new regulations.” How is the impact related to the lifecycle?
2. “So, we look for a balance in the creation and management processes. By seeking balance, we in effect create a layering process within the organization around the concept of IC/IP. By creating this layering, we balance the creation of IP/IC against its relevance and the management requirements.” What is this layering process?
3. In discussing factors that affect the cost of IP/IC creation, the authors mention complexity within a system that interacts with the IP/IC. They say, “An example of the impact of complexity would be ‘the one that got away.’ An idea and the implementation concept behind the idea can often be fleeting. Complexity or blockers applied early in the information life cycle can force good ideas to escape without ever being realized, or not realized in time.” What are some examples of the blockers they mention that can ruin these ideas early on?
1. In the beginning of this article, Andersen and Mandich offer us a summary that reads "Organizations must get the right information to the right person at the time required to make the right decision" (p. 1). But if the idea or information is still around, but hasn't been received by the right person at the right time, does it then become superfluous? The article makes it sound like if you don't get the timing right, then your idea or knowledge is basically useless. Is there another time in the Information Life Cycle when ideas/knowledge could be introduced?
2. Having read quite a few articles on the theoretical DIK hierarchy, it is interesting to see that in this article, information is created from knowledge. Someone has tacit knowledge which is then (if the timing is right) transformed into an expert system or explicit knowledge. Andersen and Mandich's article focuses on information management, but how does that differ from knowledge management? Is there an argument of information vs. knowledge management?
3. On page two of the article, the authors briefly mention three different steps of the information life cycle - to formalize, rationalize, and discard - that are important to information management. I feel that they addressed the issue of formalizing and rationalizing information, but didn't place the same emphasis on the last step. Is it that people don't like to talk about discarding information? Is it somewhat of a taboo subject in information management, like the place where information and ideas go to die?
3. The author says that "Yesterday's discarded idea might be the IP/IC that saves the company tomorrow!" I think for discarding the information, he doesn't mean destroy the information or throw it away, but to push the information into less critical systems when it is no longer relevant.
1. Disconnected bits to connected bits: what about skills or know-how? The skill of conducting brain surgery is gained through years of experience and training, and differs from information documented on a website or in a book. So information changes into a different sort of information as it moves up through the stages of the life cycle, right?
2. I find this article very useful for understanding knowledge management in a large corporation. I work for JCPenney, a very large company with thousands of employees across various departments. As a user experience designer, I have to take account of different stakeholders when conducting a project. But sometimes I find it very hard to locate the right contact person to ask about his needs for a certain problem. That person definitely has an idea of what he wants, i.e. tacit knowledge. But since we cannot meet and talk, his tacit knowledge and mine cannot evolve into explicit knowledge, and so we cannot find the proper solution.
1 - I'm curious as to how information professionals can disrupt, or improve the flow of information within a company. Several UT graduates have gone on to corporate librarianship positions that were recently created within a company, so how does a new employee start from the ground up when it comes to consolidating expert knowledge, or becoming a trusted resource for information? For example, is it more efficient to just say "John in IT has the answer to that" or does the IR professional tend to provide more comprehensive answers? Are there any people who have worked in corporate information/knowledge management settings that could speak to this?
2 - How can organizations better recognize who has useful or valuable knowledge to share? When I was working at Levi-Strauss, they were in the process of trying to create a "Yes, but how?" culture - that is, a culture focused on saying yes to new ideas to improve efficiency. However, I found that lower-level employees were often not taken seriously, even when their positions were full of inefficient processes that could have been streamlined, but if the same critique came from, say, their manager, people instantly paid attention and worked to fix the problem. How can we determine "expertise" in a business environment so that title does not become the only source of a person's value?
In regards to your second question, I think there is a difference here between ideas and knowledge. If your company wants ideas for new concepts, then ideas could come from wherever. However, some information or knowledge I would only trust if I knew the title of the person or the department it came from, because of their specialization in that subject matter. I wonder how, in knowledge management systems, the authority of the writer is captured or transmitted with the information.
In my experience, though, companies seem to value the knowledge of someone who has been at the company longer than someone new, even though the new person may have had more experience at other companies or better training. This plays into your first question: I think it is difficult for a new information professional in a company to gain the trust of others and become a standard resource for those looking for information. Whether it is starting a library, an archive, or a knowledge management system in a company, I think it just takes time for others to see the value, and this can only be built by providing good service.
1. Andersen and Mandich assume that the impact on the organization is controlled externally of the information life cycle, which has stages where information is created, delivered, and managed. Can we add one more stage to this life cycle, where information is curated? Would this change the assumption?
2. The author says that when two humans are communicating, there is an infinite exchange of metadata to find, communicate, contextualize and consume the information. In order to mimic this in IP/IC systems, a layering process is suggested. What if the metadata processing is inefficient in these layers and leads to more imbalance in the creation and management processes?
3. While assigning value to a source, an important factor to consider is the proximity of the source. This gives rise to the concept of TTL (Time To Live), which must be assigned and changed with each improvement of the document. This concept can tell us about the accessibility of the source, but in what way does it convey the location of the source?
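The paper doesn't prescribe any implementation of TTL, but the idea of a lifespan that is reassigned with each improvement of a document can be sketched in a few lines. Everything below (the class, its field names, the 90-day lifespan) is a hypothetical illustration of the concept, not anything from Andersen and Mandich:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import Optional

@dataclass
class DocumentRecord:
    """Hypothetical record pairing a piece of content with a TTL."""
    title: str
    ttl: timedelta                       # lifespan granted at each (re)assignment
    last_revised: datetime = field(default_factory=datetime.utcnow)

    def revise(self) -> None:
        # Each improvement of the document resets its clock.
        self.last_revised = datetime.utcnow()

    def is_stale(self, now: Optional[datetime] = None) -> bool:
        # Past its TTL, the record would drop into an unmanaged tier.
        now = now or datetime.utcnow()
        return now - self.last_revised > self.ttl

doc = DocumentRecord("Copier Y FAQ", ttl=timedelta(days=90),
                     last_revised=datetime(2020, 1, 1))
print(doc.is_stale(datetime(2020, 6, 1)))  # True: the revision is 152 days old
```

Note that a sketch like this captures only the age dimension of a source's value; proximity and trust in the source would need to be modeled separately, which is partly what the question above is getting at.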
1. In Mandich’s introduction, he discusses the difference between “tacit” and “explicit/connected” information, giving an example of tacit information (and an expert system) by saying that it “might be locked in the head of senior employees who have been completing a specific task for many years”. Mandich seems to dislike this expert information system in favor of connected information, as he talks about companies who have “evolved past the expert system” and claims that “organizations with excessively formal IP-creation processes [such as expert systems] often tend to have a longer gestation period for their IP”. However, is the expert system necessarily a bad/inefficient thing? I can see the proximity issues it might create, but if you’re talking to one person who for sure knows the information you need, isn’t this better than having to search for it?
2. Mandich also discusses the “cost of creation” of information in his article. He describes (and gives diagrams of) two different models of the information life cycle that organizations can use – one with and one without “buffers”. One limitation he gives for the buffer model is that “an idea and the implementation concept behind the idea can often be fleeting. Complexity or blockers applied early in the information life cycle can force good ideas to escape without ever being realized, or not realized in time”. While this is certainly true, wouldn’t the blockers also keep bad ideas from being realized too quickly? Is it better to have a good idea realized “in time” or to keep a bad idea (which could potentially damage an organization) from being enacted?
3. In the “Complexity versus Management” section of his article, Mandich describes what happens to information that “loses relevance” and drops into the lower tiers of system, becoming “unmanaged”. While unmanaged information may certainly age, Mandich asks the question “does it ever die?” I would like to attempt to answer this question by arguing that it doesn’t. When I first read this, I immediately thought of the Yahoo message boards that date back 10 years but are still “up and running”. Also, just simple facts such as how we used to believe the world was flat. Obviously, the world isn’t flat. That information is no longer relevant, but it isn’t dead either. However, does that just fall in to the “archived” information category that isn’t “used” but is still important? It could be argued that “dead” information is forgotten, but does that truly mean it died? I feel that forgotten information could still be in the realm of information that hasn’t been learned yet. It’s there – we just don’t know it.
3 - While I would agree with you that information doesn't technically die, there's also a bit of a spectrum at play in terms of information's relevance or validity/truthfulness. I think the difficulty in answering the "does information ever die?" question is that it eventually becomes rhetorical, much like the tree falling in the forest. We could ostensibly argue that all information is already there and we just "don't know it" yet - cures for diseases, mapping specific genes, plant species, all of this information is already available but we haven't necessarily discovered it. I think the important point Mandich could be making is that information that drops out of the sphere (extinct languages, "the world is flat" "imbalanced humors cause depression") has been deemed by society to be no longer relevant or useful, so the information itself is 'dead' in terms of relevance, but our awareness that it at one time existed and was accepted as factual information isn't. If I eat an apple, the apple ceases to be physically present, but my memory that yes, I ate an apple yesterday, remains.
From a practical standpoint I think information that loses relevance for the here and now becomes contextual information that permits us to build on newer information, hence the "does it ever die?" question of Mandich's. Following the "world is flat" example - we've already tested and disproven this theory, so 1) we won't test it again and 2) we can use the fact that the Earth is spherical to help us understand and investigate additional planets - we won't be expecting to find a rectangular planet in our solar system. It's not so much that "the Earth is flat" is specifically relevant information, but memory of the hypothesis's failings is, if that distinction makes sense.
1. I also questioned Mandich's dismissal of tacit information systems. His description of an expert information system reminded me of employment trends spawned by industrialization; jobs took less and less general knowledge and more very specific expertise. More cogs were needed to run the machine. So maybe Mandich is picturing an early 20th century system of expert information? Oy vay. I think the more company knowledge we can make explicit through wikis, etc. the better but that we will always turn to specific individuals for their expertise in certain areas that can only be learned through experience.
1) The article alludes to but ultimately skirts the debate over whether information ever dies. I think this is an interesting question worth exploring. From a historiographic perspective I would argue that it is valuable to know what was known or believed at a given time, even if that information no longer has day-to-day applications. However, the article seems to come from a more corporate perspective, which may not find the same value in this information. Is there a right or wrong answer to the question of whether information can die?
2) This article seems to take a different view of what constitutes “tacit information” than previous readings have. For instance, it lists information on a computer’s local hard drive as “tacit information,” whereas past readings have characterized tacit information as unarticulated and taken-for-granted information internal to an individual. If the information has been articulated outside of the individual and is stored in a relatively easily-retrievable format, is it “tacit” or just disorganized?
3) The “heavy process” method of dealing with complex information seems to be criticized here, with the example of the slow FDA drug approval process possibly costing lives. What the authors don’t seem to consider, though, is the fact that the FDA approval process (while cumbersome) is designed to prevent insufficiently-tested medicines from causing harm to patients. How would the authors suggest mediating the needs for speed and precision when processing information when both of these considerations can be literally a matter of life and death?
1. Andersen refers to knowledge held by a single person within an organization as “tacit knowledge.” This knowledge must then be made explicit and formalized to create genuine intellectual capital. However, many individuals within an organization may be inclined to hoard their individual capital and avoid formalization to retain their value to their organization. How can fears of individual deprecation be alleviated to ease formalization?
2. Andersen cites the FDA as an example of an overly-long IP/IC creation process whose long approval process can cost lives. Meanwhile, the FDA frequently faces accusations of regulatory capture and even that long approval process is fraught with methodological arguments in drug studies. The FDA is an external IP creator meant as a gatekeeper for private actors with a profit motive in rapid approval. How can the FDA better combat methodological challenges to accurate creation?
3. Documentation of processes has proven to be an efficiency booster for many organizations, and often a necessary step as agencies expand and formats must update and change. At times, an external consulting firm may be brought in to assist in documentation processes. Does this smooth the IP creation process, or does the monetization of the process result in an elongated cycle?
1. Easing the fear of being made obsolete would be hard to do in an organization (and perhaps industry) that treated its employees like single-use printer cartridges - as if you are no longer valuable once the information is out of your head. I have a friend who once interned at a major software company in town, and he wrote software to do the work that he was hired to do. In one sense, he made himself obsolete. With that in mind, which organization has more hope of improving its management of information? An organization that would downgrade his value upon discovering that this employee automated his own job? Or an organization that would give this employee the chance to work on more interesting and tougher projects because he has proven himself to be valuable in other ways than the company originally recognized?
1) In this passage "Information that is no longer relevant can be quickly pushed out of the critical systems and into less critical systems, pending their eventual retirement... two sides of the same coin, then, are light process (publication in a journal), which quickly moves like a life-saving surgical technique to a larger population, and heavy process (FDA), which might drag on for years before allowing a life-saving drug to emerge." Can it be argued that information with long-term/long-reaching effects for a person warrants taking a longer time to verify and validate? A surgical technique's greatest impact on a person is during surgery, but a drug that person takes to combat a long-term condition like arthritis makes a great impact on a regular basis.
2) There is a lot of talk of the value of a source being the 'age of the information'. Is there a way to quantify how old information can lose value in one field but potentially gain equal or greater value in another? Has there ever been a focus on how information's importance does not necessarily erode but instead shifts? Where would archives fall in the information lifecycle depicted?
3) The very beginning states "the impact of an organization is controlled externally of the information life cycle -- and, for that matter, of the organization itself -- and cannot be controlled by the organization." How would organizations such as subscription databases, whose purpose and business is the controlled access to information, fall into this mode of thinking? Do they not by their very nature impact the information life cycle and control it?
To your question #2, I asked a similar question (below), and I've been wondering about this throughout the class, as we discuss "useless data," "old information," and even "incorrect information." All data has potential value, even incorrect data (it might, for instance, help historians understand category mistakes, technological glitches, or dogmatic assumptions made by researchers of prior generations, which could be historically important). I agree that pinpointing "where archives fall" within this paradigm is a vital question, and I hope we explore it more in class. This sounds like a great topic for a (perhaps rather theoretical) paper.
1. The author defines two types of information: tacit and explicit. For tacit information, the author gives three examples: human memory, the local hard drive of a computer, and an expert system. From my understanding, the author classifies information that is not widespread as tacit. Does this type of information face a high risk of disappearing before it reaches the next step of the information life cycle? Or does this mean tacit information is more vulnerable than explicit information?
2. I used to work at a large firm in China, where I encountered many things that clogged the flow of information within the company. For example, we created process documents to help new co-workers get on the right track as quickly as possible, but the paperwork overload undermined our efficiency; we could have spent that time thinking about and handling more important work instead of recording what we had done. Given this, how should we weigh jobs that improve efficiency against work that streamlines information flow?
3. To illustrate the dissipation of information, the author compares it to throwing a baseball, concluding that the distance between the information a user receives and what they actually need determines that information's proximity and timeliness. This may hold for a simple model involving only one receiver and one thrower. Could the example fail, however, in a net-like model in which each point on the net can be both a receiver and a thrower? What pushed me to this question is that even a small rumor can set off a storm on Twitter.
“Organizations must get the right information to the right person at the right time to make the right decision.” I would replace the word “right” with the word “best”. Sometimes no information is right, or there is no information at all. Sometimes designing information delivery to the “right” person is not as good as getting information to the “best” person or people. And are there always right and wrong decisions?
“If you doubled the distance, the pitcher would not be as effective and would have to leverage a completely different type of pitch.” - It seems like archivists are pitchers who get called in when the distance from the mound to home plate changes - which seems to be all the time.
1. At the very beginning of the article, the author introduces the concept of “tacit information”. I agree that knowledge in an expert system is tacit, but I do not think information begins its life in a tacit form in all cases. For example, a count of a company’s inventory is new information that is not tacit. Since all the following discussion of the information life cycle starts with disconnected information, I suspect the author means that knowledge, which begins its life in an expert’s head, is tacit and disconnected.
2. In Figure 4, the author attempts to discuss the cost of IP/IC management but does not clearly demonstrate the relationship between the cost of management and the two factors. If the goal of IP/IC management is moving information from the lowest to the highest levels, Explicit Knowledge seems to require less management than Tacit Knowledge. If so, the cost of managing Explicit Knowledge is less than that of Tacit Knowledge, right? If my understanding is correct, I wonder what role the “Required or mandated IP/IC process” plays. In the figure, the “Required or mandated IP/IC process” is the reason knowledge becomes explicit or tacit. Could the cause and effect be two independent factors?
3. In Figure 7, the article discusses management and complexity again and adds another factor: relevance. On the one hand, the author says the management requirements for information increase as the relevance of the information increases; I think the author should add the premise that complexity stays unchanged, and the first sentence has the same problem. On the other hand, I can hardly understand the “low relevance” and “high relevance” curves. Why do the Value curves approach the Relevance curves so closely? Does that mean relevance is increasing or declining? If so, the figure shows that when relevance increases, complexity also increases, but that is not true. I have no idea how to fix this figure, but I do think it has some problems.
1. The authors write, “Many organizations rush information from its tacit state to a formalized state much too quickly. The reverse is also true with organizations that slow the IP/IC creation process.” However, they do not provide examples of this in organizations. What is an example of information moving from tacit to formalized? Is this a bad thing?
2. How do time restrictions or limitations on users or creators of information in an organization affect the IP/IC processes that Andersen and Mandich describe? They write of the costs, and time could be considered a cost, but they don't talk about how employee time constraints could affect the process of consuming information or of validating a source.
1. An element that seems to be missing from creating an effective information management system is promotion and education. It takes time to add the metadata, keywords, time of life, time of expiration, creator and refresher, etc. Unless employees see the value of adding this information, managing the information within the knowledge-management system will never truly be successful. As Andersen says in his biography: "They felt that the taxonomy and other requirements were too stiff and rigorous. Yet, on the other side, the same people complained that the search was ineffective." Essentially, they want the benefit without the responsibility of the work.
2. Andersen lists the impact of the information on the organization as a tier in the life cycle, yet doesn't go into detail when discussing the tiers. If this area were stressed more, perhaps it would influence change. For example, if it were a job requirement for all employees to document their ideas and idea development, then maybe they would spend time and energy developing a system that expedites this documentation. Perhaps some items could be worked into the system to be auto-generated through machine login and other means.
3. In our previous articles we have discussed a difference between information and knowledge. Here the authors use the terms interchangeably. Is there a difference in process and life cycle for information versus knowledge? Or can the same process be used for both? How would the process be different for knowledge; are there elements or steps left out?
1. Intellectual Capital and tacit knowledge are clearly closely related according to the theoretical framework of this white paper. Are these terms identical, synonymous, overlapping, or merely related? Are their definitions situational? How would we import these terms into our specific fields (libraries, archives, IT, etc.)? Thinking about how these terms are used herein, have they, in our experience, pointed to identical concepts? I’m thinking about actual work experience (regardless of field).
2. In the section titled “How Is Information Delivered?” the authors say: “Essentially, when two humans are communicating, there is a fluid and almost infinite exchange of metadata to find, communicate, contextualize, and consume the information. Automated IP/IC systems cannot do that today, as it would require a massive management system for the creation of the IP/IC.” This directly relates to the issues we’re currently exploring in my Metadata in Cultural Heritage Institutions class, where we’re discussing the role of subject specialists, catalogers, and crowd-sourced metadata entry. To what extent is RDA, based as it is on FRBR, pointing toward (lurching toward?) a totality of cross-referencing? This Microsoft report asserts (above) that this cannot be done and would require a massive system, yet information professionals, primarily from the library field, are trying to tackle it (at least in theory).
3. The authors explicitly privilege “age of information” as a major factor in its usability. Is this always a critical factor? In what kinds of situations would it be critical versus non-important? They also discuss “time to live,” saying “Each document that is created should include a time that it is relevant. Then, upon updating documentation or information, that TTL can be changed.” While this seems to be an obviously relevant protocol for the average office or business system, how do these concepts relate to information professions specifically? I guess I found this reading very… “Office Space” – too dependent on purposely obtuse concepts (and acronyms!) and a bit inaccessible beyond the corporate lexicon.
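The "time to live" protocol quoted above (each document carries a relevance date, and updating the document resets or changes its TTL) can be imagined in code. This is a minimal, hypothetical sketch of the idea, not anything from the paper; the class and method names (`Document`, `is_stale`, `refresh`) are my own:

```python
from datetime import datetime, timedelta

class Document:
    """A document that carries a TTL, as the authors' quote suggests."""

    def __init__(self, title, ttl_days=365):
        self.title = title
        self.ttl = timedelta(days=ttl_days)
        self.last_updated = datetime.now()

    def is_stale(self, now=None):
        """True once the document has outlived its TTL."""
        now = now or datetime.now()
        return now - self.last_updated > self.ttl

    def refresh(self, ttl_days=None):
        """Updating the document resets (and may change) its TTL."""
        self.last_updated = datetime.now()
        if ttl_days is not None:
            self.ttl = timedelta(days=ttl_days)

doc = Document("Copier repair FAQ", ttl_days=30)
print(doc.is_stale())                                     # fresh, so False
print(doc.is_stale(datetime.now() + timedelta(days=31)))  # past TTL, so True
```

A system built this way could automatically push stale documents out of critical systems, which is the retirement behavior the article describes.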
1. In Mandich’s discussion of tacit vs. explicit or connected information, the author seems to favor a connected information system and speaks as if an expert information system relying on tacit information is somehow less evolved. In some sense, don’t most businesses rely on a combination of systems today? Don’t we rely on “experts” with a wealth of personal knowledge throughout our day-to-day lives?
2. I’m confused about Mandich’s definition of tacit information in the first place. How can the human memory and the local hard drive of a computer both be tacit information? What point is Mandich trying to make by associating the two?
1. At the beginning of this article, the author introduces the IP capture process between disconnected bits and connected bits, which later become the original resource in the IP life cycle. However, Figure 1 also shows a direct transformation between the two kinds of bits. So my question is: how could the disconnected bits become connected without the IP capture process?
2. The author claims that the value of the source can have a huge impact on the information life cycle, and he summarizes three factors that create the value of any specific source: age, proximity, and source/previous interactions. I agree with his point of view, but I would like to know more about methods of judging source value. Could we say that the more recent and the closer the information is, and the more previous interactions we have had with its source, the more useful it will be?
3. In IP/IC management, there are three toppling factors that have a great influence on the IP/IC: cost of management, cost of creation, and value of the source. But I wonder whether it is possible to single out the most influential factor among them. For example, how much cost managers are willing to bear depends on how valuable they believe the information resource to be. So, is value the determining factor? I don't know...
1. Reading through this article I am very clearly reminded of what a lot of companies I worked for used for their own knowledge bases and the life cycle that they had. Most often the wikis or whatever the company was using would get created to pool everyone's knowledge on a particular subject so that it could be archived and shared with others. At the beginning of this process everything would be great. What would inevitably end up happening is the archived information wouldn't get updated with changes and would get stale and eventually become unusable. This happened at every company I've ever worked for.
2. The value of the source to the end user is something that I'm very familiar with in my personal life, and something that is plainly evident in large examples like Wikipedia and other crowd-sourced information sources. It seems like every time I have a computer problem, I'm almost guaranteed to find search results that I immediately disqualify because of the source of that information. This is also the problem that Wikipedia has solved, or is trying to solve, through its model of having users source and edit all the information.
3. The way an organization shepherds information through its life cycle, and how this can negatively affect its impact, is something I have first-hand experience with. Oftentimes a knowledge base was created out of a direct need, and the knowledge it contained was not effectively disseminated to all those who might need it. Later on, other people were made aware of its existence because they needed some piece of information it held. Depending on how much time had passed since the knowledge base's creation, and given the very real possibility that the information within it was never updated, the information had often become stale or inaccurate. This then creates opportunities for mistakes born of the confusion of inaccurate information.
1. In this article, the author defines the information life cycle in another way: how the information is created, how the information is delivered, and how the information is managed. Is there any overlap between this information life cycle and the framework of the information life cycle in the last article (creation, acquisition, cataloging/identification, storage, preservation, and access)?
2. According to the author's statement, relating information to a specific problem is a way to create information and then document it. Is that the only way to create information? Could it cover all the disconnected bits? Can the proximity of information measure whether the information has been well delivered?
3. The author talks a lot about cost in the "how the information is managed" phase; what costs (challenges or risks) can we find in the "how the information is created" and "how the information is delivered" phases?
1. The article uses the metaphor of the FDA to illustrate how a system can decrease simplicity and in turn increase IP/IC cost. I am curious, though, how the author sees this complexity as an oversight when, in the case of the FDA, it seems purposeful: preventing bad drugs from entering the market, or weeding out bad IP/IC, so to speak. I am interested in how such a process could be simplified while still protecting the consumer.
2. At the end of the paper the author presents the question of whether or not information ever dies. I tend to believe that information (even outdated and useless) always has a value, even if its only value is to chart the human progress of a process. For example, it would seem to me to be very important to document information about old surgical techniques for future reference by surgeons who might be researching possibilities for new techniques. Figuring out where that information should live seems to be the hardest question.
3. I was very confused by the concept of information creation as outlined in this paper. Did anyone else find the diagrams in this article completely confusing? Also, the authors of the paper seemed to leave out all printed and visual materials under information deliverables. Is this perhaps because it’s Microsoft?
In this article, the author lists several ways information is delivered: one-to-many presentation, white paper, website FAQ, informational website, and so on. However, the author doesn't seem to address the efficiency of each method. So which one is the most efficient?
Scott holds that the age of information is one of three factors that create the value of any specific source. It is easy to understand the relationship between the other two factors and the value of information. However, does new information have more value, or does old information?
In the first half, Scott mentions the concept that an idea never really disappears. However, in the later part, he argues that complexity or blockers applied early in the information life cycle can force good ideas to escape without ever being realized. So how can we explain this conflict?
1. I’m somewhat vague on his concept of information proximity. Perhaps we could discuss it in more depth.
2. So is he more or less arguing that organizations need to learn better, or more effective ways, to manage their information capital and resulting information property? Or is he simply emphasizing the importance of doing this? Or am I way off the mark in understanding this paper?
3. His definition of the information life cycle, the process of knowing when to formalize, rationalize and discard information, differs quite a bit from the other two readings. Is the concept he’s putting forth explicit to an organizational setting? It’s founded largely in the idea of information management, but does it encompass other aspects of the cycle (such as creation, storage, etc.) but in more simplified and condensed terms?
1. How does explicit knowledge become disconnected bits? When we learn that a piece of knowledge is outdated, is that itself another piece of useful information that we did not know before?
2. How should we value the age of information? Is the latest more dependable? What if experience works as a kind of information? Is it then the older, the better?
3. How does this article differentiate information from knowledge, as compared with what we discussed in class before?
Andersen and Mandich state quite clearly that knowledge can be input into an expert system. The example using John illustrates how one might document knowledge of a particular subject. However, would the tips or what the article calls “knowledge” be simply information in the hands of others that are not John as the general DIKW hierarchy we read about might suggest?
One of the problems the article points out a user might be faced with in relation to information proximity is that of the inability to use information in its current format. This seems like a problem that occurs when trying to deliver the information in an easily digestible manner. With the tools available to individuals and organizations, which include the ability to present information in different mediums and formats, is this problem less likely to appear moving forward?
In the Cost of Creation section Andersen and Mandich say, “Many organizations rush information from its tacit state to a formalized state much too quickly.” The way the section follows suggests that this is a conscious decision, a way to avoid losing out on what appears to be a valid IC/IP track, with management used to make the process as efficient as possible. How then do organizations deal with the cost of IP/IC as they attempt to manage the process entirely? It seems like finding an equilibrium takes just as much work as the IC/IP cycle itself and must adapt with each new idea in the pipeline.
1. In the introduction it says that “in all cases, information begins its life in a disconnected, tacit form.” So is it disconnected or tacit? From the way Andersen and Mandich worded things it sounds like tacit information is more of an unknown, secret type of information privy only to a certain person(s) whereas disconnected information sounds like a more practical way of describing information a person has not yet processed and does not know yet, but will.
2. The author says that there are four tiers of the life cycle of information: relevance, timeliness, ease of reuse, and the impact of the information on the organization. Would you add any more tiers to the life cycle of information, and what would they be?
3. The section “Value of the Source to the End User” highlighted a question I’ve always had. The authors ask, “how do you know what another person knows?” Or rather, how do you know your source is reputable, and why should you trust that source if all it does is use other sources to make its point? Why is any source more reputable than any other?
1. The author gives us the definition of the information life cycle in this article. It is a process of changing information from disconnected to connected, from informal to formal, and I think the aim of this process is to make information more useful and effective. Why, then, does the author say that ‘managing information through this process becomes an exercise in knowing when to formalize, rationalize, and eventually discard information’? I mean, why should we eventually discard information?
2. The author mentions an example of an expert system: ‘John knows how to fix the copier on the third floor; ask him. In a broader case, John works for Company X that makes Copier Y. If you call their help desk and ask for John, he can fix the copier.’ This example shows us how to formalize tacit information. But to some degree, it also shows the limitation of formalizing information, which narrows the information's scope of application. If John no longer works at Company X, then the information in the broader case becomes invalid, while the first version is still usable. So why should we formalize information?
3. The author states that the ‘source of the information, and previous interactions with that specific source’ is a factor creating the value of any specific source. There is an interesting phenomenon in our daily lives: when we need advice about an illness, we tend to see and trust a doctor rather than someone else, because we value the source of the information, which in this case is the doctor. This used to happen in hospitals but now happens on the internet. There are lots of people online claiming to be doctors, and we tend to trust them. But have you ever considered that some of them may not be real doctors? So I doubt whether this factor can correctly establish the value of a specific source.
1. In this paper the authors put forth an interesting model of the life cycle of information. Upon examining this lifecycle it seems in some ways to be similar to or at least attempt to accomplish the same goals as the DIKW pyramid that we discussed in previous articles. Is this model of the lifecycle of information a substitute for the DIKW definition of information and if so how does it stand up against that model?
2. In this article the authors state that there are three methods in which information is created. They are systematic, environmental, and trial-and-error (or ad-hoc). Are these the only methods in which information can be created or are there other methods in which information can come to be?
3. When discussing the cost of lost time for an organization in the creation of information, the author mentions that there are two processes that can be seen in creating information: the light and heavy processes. He uses a medical example to explain them. The light process is like a medical journal that allows a surgical technique to be disseminated quickly, versus the heavy process, similar to the FDA, that takes a long time to approve a drug, potentially causing deaths due to inaction. However, is it not the case that a heavy process can often lead to greater rewards because it can guarantee the usefulness of the information created? To use the author’s example, isn’t the FDA a better method because it can usually ensure that the drug is not too dangerous?
1. In the Introduction, Andersen and Mandich assume that "the impact on the organization is controlled externally of the information life cycle....and cannot be controlled by the organization." What is meant by this comment? Wouldn't the rules and regulations of the organization, and the way it manages the information, control the impact?
2. The authors, in their discussion of the Cost of Creation, mention adding additional buffers, which add to the complexity of the life cycle. Why would they be used, especially if they could cause ideas to be lost? What would those buffers be? Give some examples.
3. Andersen and Mandich talk about metadata in the section Value of the Source to the End User, complaining that it is cost-prohibitive but also that it increases proximity. As Evans and others have suggested, wouldn't it reduce the cost if it were included at the time of creation? Or could the cost be passed on to the end user?
1. What makes reuse so important in the IP creation process? I feel like there's a whole lot of information that exists in digital forms that could be, or is, limited in use, especially given how expansive digital mediums are. Are there any consequences to underused IP systems?
2. In figure 4, the author demonstrates information management within an organization. In what kind of organization is this relevant? Is this kind of framework and model best suited for institutions that work mainly with information? What about institutions that work primarily with people?
3. I find the idea of TTL and giving information a lifespan interesting. What is the benefit in controlling the age of information? And even more, thinking back to the DIKW framework, how does the age of information relate to ideas of wisdom?
1. When Andersen introduced the idea of intellectual capital, it reminded me of the concept of information silos, where information within an organization is not communicated between departments efficiently. This paper discusses how crucial it is for information to flow efficiently in organizations in order to get the information to the user at the point of need. Even within organizations that have “evolved” past the expert system and document the information they create, there still seem to be instances where the information is not easily located. How can organizations manage the creation of information and appropriate use of metadata to make this system function smoothly? Is there a need to have organizers of information within organizations, or is this something that will remain un-centralized and unstandardized as with most organizations?
2. I was very interested in the preservation of tacit information. Within many of the organizations I have worked for I found that there are experts who know the local environment and historical information about the organization that have shaped the way the organization functions. Is it ever possible for organizations to document this information or preserve it? When I think of documenting this information, I wonder if the information can ever be removed and still be meaningful?
3. While the question “Does information die?” was out of the scope of this paper, it discussed the life cycle of information, including destruction. It again reminded me of the expert within an organization who has a great deal of tacit information and then departs the organization. It seems that this information may not die but stay within a limbo, to be re-discovered as needed. If we think of the concept of information as objects, the information should be retained somewhere in the environment, waiting to be discovered and interpreted by someone new. I guess the only time it would die in this scenario would be when the environment changes significantly enough that it can no longer be interpreted in the same context as intended.
1. I was a little confused about the concept of "tacit" information. Conceptually, I understand the difference between tacit and explicit information. However, the author lists both information stored in one's head and information stored on one's hard drive as tacit information. To me, these two are fundamentally different--in ways they are stored, accessed, implemented, etc. The one similarity I see is that they are both inaccessible to other users in a system or network. Is this similarity enough to link them together?
2. For small organizations, I understand how tacit information or tacit knowledge would be useful. For large institutions, though, wouldn't it make more economic sense to centralize this type of information and make it accessible to users at other points on the network/system?
3. What happens to information that is dumped from a system after it's considered no longer applicable to the solution required? In the more philosophic channel of our first few class meetings, does it cease to be information because it no longer has value?
1. In the introduction, the authors bring up the concept of impact: “An assumption of this paper is that the impact on the organization is controlled externally of the information life cycle—and, for that matter, of the organization itself—and cannot be controlled by the organization. As such, we assume that this component of the life cycle changes slowly over time, giving us plenty of warning when processes and procedures must be altered to adapt to the new regulations.” How is the impact related to the lifecycle?
2. “So, we look for a balance in the creation and management processes. By seeking balance, we in effect create a layering process within the organization around the concept of IC/IP. By creating this layering, we balance the creation of IP/IC against its relevance and the management requirements.” What is this layering process?
3. In discussing factors that affect the cost of IP/IC creation, the authors mention complexity within a system that interacts with the IP/IC. They say, “An example of the impact of complexity would be ‘the one that got away.’ An idea and the implementation concept behind the idea can often be fleeting. Complexity or blockers applied early in the information life cycle can force good ideas to escape without ever being realized, or not realized in time.” What are some examples of the blockers they mention that can ruin these ideas early on?