PERCHE' QUESTO BLOG / WHY THIS BLOG
I created this blog to talk about sociology and market research, to do field research, and to share opinions and professional expertise on these topics.
Lifestyle and advertising

1. WHAT IS A LIFESTYLE?

In sociology, a lifestyle is the way a person (or a group) lives. This includes patterns of social relations, consumption, entertainment, and dress. A lifestyle typically also reflects an individual's attitudes, values or worldview. Having a specific "lifestyle" can be described as a pattern of behaviour based on the alternatives available and on how easy it is to choose one of them over the others. The term "lifestyle" first appeared in 1939. Alvin Toffler predicted an explosion of lifestyles ("subcults") as diversity increases in post-industrial societies. Pre-modern societies did not need a term approaching "sub-culture" or "lifestyle", because different ways of living were expressed as entirely different cultures, religions, ethnicities or minorities.

In business, "lifestyles" provide a means of targeting consumers, as advertisers and marketers endeavour to match consumer aspirations with products. An organization that decides to operate in some market normally cannot serve all the customers in that market equally: these customers may be too numerous, too widely scattered and, above all, too heterogeneous in their needs and wants. Recognizing that such heterogeneous markets are actually made up of a number of smaller homogeneous submarkets, Smith (1956) introduced the concept of market segmentation: the process of dividing the total market into several relatively homogeneous groups with similar product or service interests and similar needs and desires. From then on, market segmentation became the core concept of fine-tuned target marketing and communication campaigns.[i] Of course, many criteria can be used to assign potential customers to homogeneous groups.
Commonly, these variables are grouped into three general categories[ii]:

• Product-specific, behavioural attribute segmentations, which classify consumers according to their purchase behaviour within the product category or the benefits they expect to derive from it.
• General, physical attribute segmentations, which use criteria such as geographic, demographic or socioeconomic variables to create homogeneous target markets.
• General, psychological attribute segmentations, which use profiles of consumers developed from lifestyle analyses and sort them into groups on the basis of the things they like to do, how they like to spend their leisure time, and how they choose to spend their disposable income. This kind of segmentation is often called 'psychographics'.

In contrast to personality, which describes consumers from an internal perspective, lifestyle is concerned with consumers' overt actions and behaviour. A lifestyle is a mode of living as identified by a person's activities, interests, and opinions; self-concept often translates into a person's lifestyle, or the way that he or she lives his or her life.[iii] For example, a person may be very materialistic, preferring to wear flashy clothes and drive expensive cars, or may prefer instead a simpler life with fewer visible status symbols. Attempts have been made to classify consumers into various segments based on their lifestyles. The Values and Lifestyle (VALS) Project, developed by the Stanford Research Institute (SRI)[iv], attempts to classify people based on a combination of values and resources. Thus, for example, both "Achievers" and "Strivers" want public recognition, but only the Achievers have the resources to bring it about.

2. THE IMPORTANCE OF LIFESTYLE IN MARKETING

William Lazer introduced the concept of lifestyle patterns to marketing in 1963[v].
Many researchers subsequently applied the concept to diverse applications and supplemented demographic information with descriptions of consumers' activities, interests, and opinions. For example, Alpert and Gatty (1969) showed how a product's position can be defined using psychographic variables. Plummer (1971) applied psychographic segmentation to commercial bank credit card usage, and Reynolds and Darden (1972) applied it to outshoppers. Wells (1985) asserted that behavioural and attitudinal aspects of lifestyles may be expected to change over time, making it essential for marketers to continually monitor the congruence of psychographics and demographics as well as media usage and shopping behaviours. Later, other researchers such as Zeithaml (1985) described the changes in demographic characteristics and the increased fragmentation of supermarket shoppers.

Consumer purchases are influenced by how consumers decide to organize their lifestyle, and this depends on what they find interesting and on how they view themselves and the world around them. By understanding these factors we can forecast product choices, brand choices, purchase timing and purchase amounts. In recent years marketers have therefore worked to understand how consumers in their target markets see their lives, since this information is the key to developing products, suggesting promotional strategies and even determining how best to distribute products. Marketers use lifestyle to:

· identify consumer segments;
· design ad messages that appeal to certain lifestyles, and place these ads in media that will be seen by a particular lifestyle segment;
· uncover unfulfilled needs of lifestyle segments and develop new products.

In general, lifestyle research is based on extensive surveys using appropriate quantitative methods.
The aim is to combine motivational research, which yields a great deal of data on a few individuals, with quantitative survey research, which yields a small amount of information on a lot of people. Using psychographic surveys we can understand not only who buys what, but why people buy what they do.

3. HOW TO IDENTIFY A LIFESTYLE

We can distinguish different waves of research that identify consumers' lifestyles in different ways:

3.1 The AIO approach

Psychographic or lifestyle research, developed in the 1960s and '70s, usually takes as its point of departure extensive ad hoc AIO (activities, interests and opinions) surveys, which then lead to often very colourful and useful lifestyle typologies built with the technique of cluster analysis[vi]. AIO refers to measures of activities, interests and opinions. Activities are manifest actions (work, hobbies, social events, vacations, entertainment, clubs, community, shopping, sports, etc.). Interest in some object, event or topic (family, home, job, community, recreation, fashion, food, media, achievements, etc.) is the degree of excitement that accompanies both special and continuing attention to it. Finally, opinions are descriptive beliefs (about oneself, social issues, politics, business, economics, education, products, the future, culture, etc.). Three typical statements could be:

• I often listen to classical music (activity);
• I am very interested in the latest fashion trends (interest);
• A woman's place is at home (opinion).

Often very large batteries of AIO items were used. For example, Wells and Tigert (1971) formulated 300 AIO items, while Cosmas (1982) used a questionnaire containing 250 AIO items.[vii]

3.2 The value systems approach

In a second wave of research, the value concept came to replace this very extensive and burdensome AIO approach. Values are commonly defined as desirable, trans-situational goals, varying in importance, that serve as guiding principles in people's lives.
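Both the AIO approach and the value-based approach typically feed respondents' item scores into a cluster analysis to form segments. A minimal sketch of that step follows, using a toy k-means in Python; the respondents, the three AIO statements and all scores are invented for illustration, and a real study would use a proper statistical package with many more items:

```python
def kmeans(points, k, iters=10):
    """Tiny k-means: group respondents by squared Euclidean distance.

    Crude seeding (evenly spaced respondents as initial centroids);
    real analyses would use k-means++ with multiple restarts.
    """
    centroids = [points[i] for i in range(0, len(points), len(points) // k)][:k]
    assign = [0] * len(points)
    for _ in range(iters):
        # assignment step: each respondent joins the nearest centroid
        for i, p in enumerate(points):
            assign[i] = min(
                range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])),
            )
        # update step: move each centroid to the mean of its members
        for c in range(k):
            members = [p for p, a in zip(points, assign) if a == c]
            if members:
                centroids[c] = tuple(sum(d) / len(members) for d in zip(*members))
    return assign, centroids

# Hypothetical 1-7 agreement scores on three AIO statements:
# "I often listen to classical music" (activity),
# "I am very interested in the latest fashion trends" (interest),
# "A woman's place is at home" (opinion).
respondents = [
    (7, 2, 1), (6, 1, 2), (7, 2, 2),   # a "classical music" leaning group
    (1, 7, 6), (2, 6, 7), (1, 6, 6),   # a "fashion/traditional" leaning group
]
labels, centres = kmeans(respondents, k=2)
```

With clearly separated toy data like this, the two invented groups fall into two clusters; on real AIO batteries, the number of clusters and their interpretation are themselves research decisions (Punj and Stewart, 1983).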
The most important instrument for measuring values is the Rokeach Value Survey (Rokeach, 1973). His inventory comprises 18 values: a comfortable life; an exciting life; a sense of accomplishment; a world at peace; a world of beauty; equality; family security; freedom; happiness; inner harmony; mature love; national security; pleasure; salvation; self-respect; social recognition; true friendship; wisdom. A shorter and more easily administered instrument is the List of Values (LOV), suggested by Kahle (1983)[viii], which includes only nine values.

Values are of particular interest because they may affect a wide spectrum of behaviour across many situations. Individuals' value priorities are part of their basic worldviews; therefore, values are also important lifestyle determinants. Moreover, values are broader in scope than attitudes or the types of variables contained in AIO measures: they transcend specific situations. Finally, value inventories in general contain only a handful of values, instead of 200 or 300 AIO items. This led researchers of the second wave of lifestyle research to use value batteries as input for their questionnaires, which proved much more elegant and fundamental than the AIO approach.

3.3 The life vision approach

There is a third way to identify lifestyles: through the way people 'look at life', their 'life visions'. Life visions can be defined as the perspectives people take on some major issues in life. They can be measured using the major points of attention in contemporary Western culture, including such things as health, beauty, male/female identities, work/money/time considerations, the use of leisure, partner relations, family relations, friends, culture, politics, economics and science. For each item, it is possible to formulate two polarized visions; for instance, for 'male/female identities':

• Men and women are fundamentally equal.
The roles society prescribes for them should be abandoned.
• Men and women are fundamentally different. Therefore, society must permit men to act as true males and women to act as true females.

Respondents are asked to indicate on a seven-point scale how strongly they agree with the vision on the left or with the vision on the right of the scale (cf. a semantic differential).

4. THE STRENGTHS OF LIFESTYLE THEORY

Psychographics has proven to be a very useful tool for organisations in their marketing research. It identifies target markets that could not be isolated using demographic variables alone. Psychographic measures are designed to capture the consumer's predisposition to buy a product, the influences that stimulate buying behaviour, and the relationship between the consumer's perception of the product's benefits and his or her lifestyle, interests and opinions. Researchers have often turned to psychographics because of the limitations encountered with demographics. One advantage of psychographics is that it describes segments in terms directly relevant to an organisation's advertising-campaign and market-planning decisions. It has also appealed to marketers for its power to combine the richness of 'motivational research' with the statistical sophistication of computer analysis, providing corporate strategists with rich descriptive detail for developing marketing strategy; it gives marketers a big picture of the consumer's lifestyle. A further appealing advantage is that psychographic segments developed for markets in one geographic location are generalizable to markets in other geographic locations. Psychographics is essential for discovering both the explicit and the hidden psycho-social motives that so often spell the difference between acceptance and rejection of a brand.
Consumer behaviour research and communications research can also provide useful information on children's and parents' attitudes, perceptions, and behaviour, and on the media channels that can best reach targeted groups.

5. THE WEAKNESSES OF LIFESTYLE THEORY

Psychographic research has been criticized for a number of problems associated with its measurement and with the validity of items arbitrarily selected for surveys. The main points of criticism are[ix]:

• The methods used are purely inductive and not guided by theory. Often, the items used in lifestyle questionnaires are based on common-sense reasoning and on implicit experience in carrying out market research, and there are no standardized methods to evaluate the stability of the results of psychographic techniques. This throws doubt on whether the segments and markets targeted are reliable. The main problem is that psychographics attempts to measure intangible and diffuse concepts: values and attitudes are not easy to measure, as every person has a different personality and consequently different opinions and interests. Sometimes the questions are asked in a way that only touches the surface, and the underlying issues are overlooked or ignored.

• The explanatory value of lifestyle types or dimensions for consumer behaviour is low and not well documented. Where researchers have attempted to relate purchase data and lifestyle data so that the amount of variance in the former explained by the latter can be ascertained, the variance explained has often been very modest, sometimes even below the variance explained by demographic variables alone (Wells and Tigert, 1971). As Wells (1975) put it in a review article: 'Stated as correlation coefficients these relationships appear shockingly small – frequently in the .1 or .2 range, seldom higher than .3 or .4.'[x]
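Wells's point is easiest to appreciate as 'variance explained': a correlation of r between a lifestyle score and purchase behaviour accounts for only r² of the variance in purchases. A small Python sketch with invented numbers (both series are hypothetical):

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical data: a 7-point lifestyle-scale score vs. units purchased.
lifestyle = [1, 2, 3, 4, 5, 6, 7]
purchases = [2, 0, 3, 1, 4, 0, 3]

r = pearson(lifestyle, purchases)
# r is about 0.2 here, so the lifestyle score explains r**2, i.e. under
# 4 per cent, of the variance in purchases -- exactly the order of
# magnitude Wells complained about.
```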
Opting for dimensions (values, life visions, aesthetic style and media preferences) that reflect lasting personal characteristics and behaviours, rather than the more variable and superficial AIO items, certainly improves the reliability of the research instrument. Even so, some authors claim that psychographic or lifestyle research remains to this day one of the least understood but potentially most powerful approaches in market and communication research (see, for example, Gunter and Furnham, 1992: 30; Heath, 1995; Wolburg and Pokrywczynski, 2001).

6. IMPLICATIONS FOR ADVERTISING

In the broader realm of marketing and advertising, psychographic segmentation focuses on identifying the likes, opinions and attitudes of a particular group of people and creating messages that lead people to identify with those ideas. This is distinct from demographic segmentation strategies such as sociocultural and socioeconomic targeting, which classify customers by age, gender, and so on. Properly applied, psychographics can play a powerful part in developing effective digital-signage content and other advertising techniques.

Suppose we want to advertise a packaged rice side dish on digital signs in a national campaign whose customers span multiple disparate demographic groups. Instead of creating separate versions of a commercial spot focusing on demographic differences (e.g. showing different races, different ethnic foods on the table, different social classes, etc.), we might create a single version of the ad using images that cater to a particular psychographic profile. For example, across multiple demographic groups there is a feeling that homemade meals are more valuable, or somehow better, than pre-packaged or takeout meals. Home-cooked meals require effort on the part of the cook, and therefore represent not only sustenance but also the value of the cook's time and their care for the meal being served.
In this example, our target customer becomes someone trying to recreate these positive feelings at home, regardless of that shopper's age, gender, family size, or how much time they have to prepare dinner. To reach them, we might use a set of images (featuring different demographic groups) showing the product being cooked, featured as part of a larger home-made meal, and bringing a family, a group of friends, or a couple on a dine-in date closer together. By associating the product with concepts, attitudes and opinions that are popular across many demographic groups, a single spot can do the work of many. This technique also reduces the amount of content that needs to be developed and managed, which in turn helps to rein in costs and reduce complexity (an important but often overlooked detail of large retail networks).

Problems arise when a brand tries to use psychographic segmentation to create artificial archetypes of the 'average' consumer, family, etc. The thought here is that if product packages are filled with images of totally unremarkable, average people engaged in the desired activities (and hence sharing the same interests, etc.), the product will resonate with the widest possible buying audience. In practice this method does not work very well, especially in an ethnically and racially diverse country. Instead of images of 'average' people, we get creepy images of bland, blank-staring, wholly unremarkable people who do not seem real. Some of these images are even computer-generated, using an algorithm designed to blend the traits of various ethnic groups into a single portrait, so that the kid on the front of a box of Life cereal is supposed to be a psychographic archetype.
Fortunately, the tendency to use psychographics as a generic, thoughtless substitute for demographic segmentation seems to be fading, and savvier marketers are successfully employing psychographic techniques to communicate values, ideals and opinions to the right group of shoppers. And while demographic segmentation will likely remain the primary means of creating targeted messages for large populations, the proper use of psychographics offers the opportunity to do more with less. By building the imagery of different ideals and opinions into content that can appeal to multiple demographic groups, an advertising campaign can be expected to deliver greater relevance to consumers, lower production costs, and higher incremental sales lift for marketers.

7. CONCLUSION

Lifestyle research emerged from the recognition that important demographic distinctions simply do not exist in many product categories, and that even where they do, one cannot intelligently decide how to attract a particular market segment unless one knows why the distinctions exist. In order to attract and motivate a particular group of consumers through communication campaigns, one must gain insight into their psychological profile, i.e. their lifestyle. It is possible to develop robust and balanced general lifestyle typologies (using values, life visions or aesthetic style preferences alone, or in combination) that communication and marketing managers can use for strategic segmentation decisions across very different markets. These lifestyle typologies often outperform classic demographic and socioeconomic segmentation variables in terms of product benefit or attribute evaluation.
A global typology, combining sections on values, life visions, aesthetic style preferences and media preferences, not only provides the richest data (for communication strategists, creatives and media planners), but also yields the best discriminative performance compared with other lifestyle segmentation methods. Creating a lifestyle brand is not the only (or even the best) path to sustainable success, but it is one useful strategy for attacking a market.

Bibliography

[i] Armstrong, G., Kotler, P., Saunders, J. and Wong, V. (2002) Principles of Marketing. Prentice Hall / Financial Times.
[ii] Vyncke, P. (2002) 'From Attitudes, Interests and Opinions, to Values, Aesthetic Styles, Life Visions and Media Preferences', European Journal of Communication. SAGE Publications.
[iii] Maslow, A. H. (1970) Motivation and Personality. New York: Harper & Row.
[iv] Armstrong, G., Kotler, P., Saunders, J. and Wong, V. (2002) Principles of Marketing. Prentice Hall / Financial Times.
[v] Gilbert, F. W. (1995) 'Psychographic Constructs and Demographic Segments', University of Mississippi, Vol. 12.
[vi] Punj, G. and Stewart, D. W. (1983) 'Cluster Analysis in Marketing Research: Review and Suggestions for Application', Journal of Marketing Research 20.
[vii] Wells, W. D. and Tigert, D. (1971) 'Activities, Interests and Opinions', Journal of Advertising Research 11: 27–35.
[viii] Rokeach, M. (1973) The Nature of Human Values. New York: Free Press.
[ix] Vyncke, P. (2002) 'From Attitudes, Interests and Opinions, to Values, Aesthetic Styles, Life Visions and Media Preferences', European Journal of Communication. SAGE Publications.
[x] Wells, W. D. (1975) 'Psychographics: A Critical Review', Journal of Marketing Research 12: 196–213.
International Surveys

1. THE CONCEPT TEST

Concept testing is the attempt to predict the success of a new product idea by using qualitative methods to evaluate consumer response before the product is introduced to the market. These methods involve getting people's reactions to a statement describing the basic idea of a product that offers innovative rational or non-rational benefits. Concept testing not only gives promising ideas a fighting chance; it also provides guidance for the communication of benefits, uses, packaging, advertising, sales approaches, product information, distribution, and pricing. Usually the concept test is performed using field surveys, personal interviews and focus groups to generate and evaluate product concepts.

2. FACTORS AFFECTING SAMPLING AND DATA COLLECTION

Several distortions can occur during data collection in a concept test, in particular when focus groups or personal interviews are used[i]:

1) Bias related to the researcher. When administering the questionnaire, researchers can give respondents various clues about the expected answer, which risks influencing the response. Researchers also vary in what they observe or measure (observer variability); for example, they may be selective in their observations (observer bias), measure, question or note down answers with varying accuracy, or follow different approaches (one being more open, friendly or probing than another).

2) Bias related to non-respondents and the undecided. Those who do not answer cannot be ignored, because they have their own distinct characteristics. A good way to handle this when designing the survey is to identify these people by comparing the characteristics of those who did not answer some questions with the characteristics of those who completed the whole questionnaire; the researcher can then see whether certain groups (by age, income, sex, schooling, etc.) tend to answer differently.

3) Bias related to the respondent.
Everyone who answers a survey tends to give a positive image of himself or herself, trying to give the 'good answer' influenced by what is socially desirable or acceptable. This bias can be partially circumvented by clearly specifying the survey's confidentiality procedures. A similar problem arises if the respondent considers the subject too trivial: he might not make the effort to answer seriously, and the answers will be generic ones with no specific relation to his actual opinion.

4) Bias related to the sample. A sample should be a scale model of the population from which you seek data, but it can contain distortions: undercoverage, overcoverage, and multiplicity. Undercoverage is the failure to include in the frame all units belonging to the defined population; overcoverage is a failure of the frame resulting from the inclusion of elements that are not part of the target population; and multiplicity occurs when multiple frame units link to the same sampling unit in the population. To identify people who are not in the sample, it is possible to compare the characteristics of the respondents with statistical data from a reliable source. In this way one can discover whether some parts of the population (by age, income, sex, schooling) are not represented in the right proportions.

5) Bias related to the structure of the questionnaire. The structure of a questionnaire must be planned with care. A questionnaire can be seen as a 'thinking sequence' in which the respondent should engage. This sequence must give the respondent a feeling of continuity without suggesting answers. Another problem can be the use of unstandardised measuring instruments: for example, unstandardised weighing scales, or imprecise or missing guidelines for interviewing.

6) Bias related to the answer choices and to the goal of the questions. In this case the gathered data will be of poor quality.
7) Bias related to the research method. Another source of errors arises from the research method itself, in particular when a concept test involves a procedure of interviews. This method often implies a strong interaction between researcher and respondent, so the exchange of information might take the researcher outside his frame of enquiry, and because of this interaction the researcher may come to ask some questions differently from one respondent to another. It then becomes difficult to compare the answers. The solution lies in following a rigorous interview plan, which enables the researcher to track down the elements that can lead him away from his target.

3. INTERNATIONAL CONCEPT TEST GUIDELINES: USA, Germany, Venezuela

Survey implementation is a key element in determining whether survey data are of good quality. Each step should be planned and reviewed carefully. A data collection plan should be developed so that there is a clear overview of what tasks have to be carried out, who should perform them, and how long they will take; human and material resources for data collection should be organized in the most efficient way.

3.1 The first step: understand the market

The market for soft drinks differs distinctly between countries: for example, annual consumption in litres varies, and new drink concepts are still underrepresented in Eastern Europe, whereas in saturated markets new drink concepts are constantly in demand. There are also different trends based on combining a convenient package with a desired added benefit in the drink, such as an added health benefit, innovative value-added ingredients, or new packaging that supports a lifestyle.
Some drinks may be important sources of nutrients or other components for certain population subgroups (for example, young children, pregnant women, or people who do sport) but may not be important in the diet of the general public. Therefore, accurate data for representative samples are required, and sampling plans could be developed to provide analytical data for assessing nutrient intake by specific groups or individuals. It is also important to investigate the physical product and its subjective image, and which benefits consumers are looking for, since these must be conveyed in the total product package. Physical characteristics include range, shape, size, colour, quality, quantity and compatibility. Subjective attributes are determined by advertising, self-image, labelling and packaging. In manufacturing or selling products, cognisance has to be taken of costs and of each country's legal requirements.[ii]

3.2 The second step: consider the special problems of international marketing research

Multiple markets need to be considered, each with unique characteristics and its own availability of data and research services. Methodological difficulties may be encountered, such as nuances of language and interpretation. In a country like Venezuela there may be difficulties with fieldwork supervision, cheating, and data analysis (lack of computer technology), as well as infrastructure difficulties – lack of telephones, roads, transport, or respondent locations. There are also cultural difficulties – reluctance to talk to strangers, or inability to talk to women or children (in some specific subgroups, for example) – and legal constraints on data collection and transmission. The two most important modes of scanning are surveillance and search, each giving data of a general or a specific kind that is invaluable to the strategy formulation process, even if, in every decision about whether or not to obtain data, costs have to be weighed carefully against benefits.
Factors that should be considered include:

Differing usage conditions: climate, skills, level of literacy, culture or physical conditions. For example, Venezuela has very warm weather all year round; Germany is a cold country where people have a strong habit of drinking beer as if it were a soft drink; the USA is very large, with very different climatic conditions and several subgroups and religions (Chinese, Hispanic, Irish, etc.).

General market factors: incomes, tastes, etc. Some things may be very affordable in some countries and not in others.

Government: taxation, import quotas, non-tariff barriers, labelling, health requirements.

History: local conditions sometimes change as a result of history.

Financial considerations: in order to maximise sales or profits, the organisation may have no choice but to adapt its products to local conditions.

Pressure: sometimes, as in the case of the EU (Germany), suppliers are forced to adapt to the rules and regulations imposed on them if they wish to enter the market.

Product packaging, labelling, physical characteristics and marketing have to adapt to cultural requirements where necessary. Religion, values, aesthetics, language and material culture all affect production decisions.

3.3 The third step: select the sample

Selecting a good sample is very important when using face-to-face interviews. It is important to avoid using multiple frames whenever possible, and, when the survey is conducted in different countries, to use the same frame for surveys with the same target population. Sources of data distortion include, for example, duplication of data in the frame, failure to update the frame for births, deaths and out-of-scope units, and, in general, the overall quality of the frame.
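A recurring question when selecting the sample is how many interviews each country needs. For estimating a proportion under simple random sampling, the usual textbook calculation can be sketched as follows (the 95 per cent confidence level, z = 1.96, and the worst case p = 0.5 are standard assumptions; the finite-population correction and design effects from clustering are omitted):

```python
import math

def sample_size(margin, z=1.96, p=0.5):
    """Respondents needed to estimate a proportion to within +/- margin.

    Simple-random-sampling formula n = z^2 * p * (1 - p) / margin^2,
    rounded up; clustered designs would need to inflate this figure.
    """
    return math.ceil(z ** 2 * p * (1 - p) / margin ** 2)

n_3pts = sample_size(0.03)   # +/- 3 points at 95% confidence -> 1068
n_5pts = sample_size(0.05)   # +/- 5 points at 95% confidence -> 385
```

This is only a lower bound per country; subgroup estimates (section 3.3's minorities, for instance) each need their own adequate counts.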
To assure the quality of the data collected, it is important to build several stages into the data collection process:

Choose the appropriate season(s) to conduct the fieldwork (if the problem is season-related, or if data collection would be difficult during certain periods). Verify the accessibility and availability of the sampled population, and the public holidays and vacation periods in each country.

Train research assistants carefully in all topics covered in the fieldwork manual as well as in interview techniques, and make sure that all members of the research team master techniques such as: asking questions in a neutral manner; not showing by words or expression what answers one expects; not showing agreement, disagreement or surprise; and recording the answers precisely as they are provided, without sifting or interpreting them.

Pre-test the research instruments and research procedures with the whole research team, including research assistants.

Arrange for ongoing supervision of research assistants. If, in the case of a larger survey, special supervisors have to be appointed, guidelines should be developed for the supervisory tasks.

The sample should target the de facto population (that is, all people living in the country, including guest workers and immigrants) and not the de jure population (the citizens of that country alone). It is important that the sample be a good 'miniature' of the country's overall population, with full geographical coverage. The size of the sample must be adequate to provide good (robust) estimates of the quantities of interest at the national or subnational level, depending on the objectives of the survey. For various purposes it may be necessary to have adequate representation of minorities (for example, ethnic or other subgroups), which may require oversampling (that is, giving them a higher probability of selection).
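When minorities are oversampled in this way, design weights (population share divided by sample share, i.e. inverse selection probabilities up to a constant) restore representative estimates at the analysis stage. A minimal Python sketch with invented shares and responses:

```python
def weighted_mean(values, weights):
    """Design-weighted mean of survey responses."""
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

# Hypothetical: a minority is 10% of the population but was oversampled
# to 50% of the interviews. Responses are on a 1-7 scale.
majority = [3.0] * 5    # majority group answers, mean 3
minority = [7.0] * 5    # minority group answers, mean 7

unweighted = sum(majority + minority) / 10   # 5.0 -- distorted upward
w_majority = 0.9 / 0.5   # population share / sample share
w_minority = 0.1 / 0.5
weighted = weighted_mean(
    majority + minority,
    [w_majority] * 5 + [w_minority] * 5,
)
# weighted == 3.4, matching the true population mean 0.9 * 3 + 0.1 * 7
```

The unweighted mean (5.0) overstates the minority's contribution; the weighted mean recovers the population figure.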
In case of oversampling, differential weighting should be applied at the data analysis stage to correct the distortion caused by oversampling. A sampling frame with 90 per cent coverage of all key subgroups of interest is considered acceptable. It is better to use the most recent sampling frame available.[iii] The possible impact on sampling of variations in specific definitions across countries (such as "household", "manager" or "sportive people") should be elaborated in country reports. It is very important to identify the use of specific definitions in different countries and to understand whether there are differences. The use of strict probability methods at every stage of sampling is crucial, and makes it possible to extrapolate the sample data to the whole population. Otherwise, the survey results will not be representative and valid. 3.4 The fourth step: to organize the survey If the survey is developed in different countries, a good and strong central organization of the survey in each country will help ensure quality. The purpose of establishing standard procedures is to help ensure that the data collection is relevant and meaningful for the country's needs and that the data can be compared within a country and across countries to identify similarities and differences across populations. Each survey team should prepare a central survey implementation plan and a task calendar in which the details of the survey logistics are laid out clearly. This plan should identify how many focus groups and face-to-face interviews are needed to cover an identified portion of the sample in a given region. Each survey team should have a supervisor who oversees and coordinates the work of the interviewers, as well as providing on-site training and support. Supervisors should set out the daily work with the interviewers at the beginning of the workday and review the results at the end of the day.
In this review, interviewers brief their supervisors about their interviews and results. A daily logbook should be kept to monitor the progress of the survey work in every country survey centre. Information must be maintained on each interviewer so that his/her work can be monitored by the supervisor on an ongoing basis. A pilot survey, lasting a week or two, should be conducted in each country at the beginning of the survey period. The pilot should be used as a dress rehearsal for the main survey. Fifty per cent of the pilot sample would then be reinterviewed by another interviewer to demonstrate the stability of the interview's application. The data from the pilot should be rapidly analysed to identify any particular implementation problems. Since the instrument to be used in the survey would already have undergone extensive pre-testing prior to the pilot, the intention of the pilot testing should be to identify minor linguistic and feasibility issues and enable better planning for the main phase. It would also be expected to catch obvious mistakes in skip patterns, etc. Feedback from the pilot will correct these errors and allow minor adjustments to be made. All countries should send a copy of the printed documents. Response rates should be monitored continuously, and each centre should employ a combination of strategies to increase participation in the survey and reduce non-response. The response rate may vary across countries and has to be compared with that of other surveys in the same country. Local customs and traditions must be taken into account in the evaluation. Each survey should be evaluated within the context of the country.
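The continuous monitoring of response rates per centre can be sketched as follows (the centre names, figures and the 60 per cent follow-up threshold are hypothetical illustrations):

```python
# Sketch: monitoring response rates per survey centre from a daily logbook.
# The data and the flagging threshold below are hypothetical.

def response_rate(completed, attempted):
    """Completed interviews as a share of eligible sampled units."""
    return completed / attempted if attempted else 0.0

# Hypothetical daily logbook: centre -> (completed, attempted)
logbook = {
    "Centre A": (180, 240),
    "Centre B": (95, 200),
}

for centre, (done, tried) in sorted(logbook.items()):
    rate = response_rate(done, tried)
    flag = "  <- non-response strategies needed" if rate < 0.60 else ""
    print(f"{centre}: {rate:.1%}{flag}")
```

As the text notes, the raw figure only becomes meaningful when compared against other surveys in the same country.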
It is essential to compare with other comparable surveys in the same country.[iv] 3.5 The fifth step: to write the concept statement and the questionnaire It is very important not to make the statement just a straightforward "objective" description of the features of the product, and also not to make it too "salesy." Instead, a balance must be struck between a dry spec sheet and a slick brochure. It is important to start with a brief description of the present situation, in words the target audience uses. It is usually best to describe the concept as a solution to a problem. Then continue with the product description in benefit terms, describe the product itself, and explain how the product's claims will be substantiated. In this step it is very important to remember that different countries have different mentalities and decision-making processes. When we cross some data in an exploratory analysis, trying to verify whether there are differences in the perception of the decision-making process in different countries, we can see, for example, that in the USA people take their decisions more individually than collectively, whereas in Venezuela people make their decisions more collectively than individually.[v] To make meaningful comparisons of data across cultures it is important to have a relevant instrument that measures the same construct in different countries, so translation is one of the keys to ensuring equivalent versions of the questions in different languages. Given today's multicultural societies, it is essential to have good translations that measure the same concepts in the survey. In a country like the USA, the instrument could be translated into multiple languages depending on the size of the different language groups within the country. Each linguistic group that constitutes over 5 per cent of the population should be interviewed in its own language.
For respondents who are interviewed in a language for which a formal translated version has not been produced, emphasis is placed on the understanding of key concepts. It is important to maintain the equivalence of concepts and to ensure a procedure that identifies possible pitfalls and avoids distortion of meaning. These guidelines stress that: a) translation should aim to produce a locally understandable questionnaire; b) the original intent of the questions should be translated with the best possible equivalent terms in the local language; c) question-by-question specifications should aim to convey the original meaning of the questions and pre-coded response options; d) the questionnaire should first be translated by experts who have a basic understanding of the key concepts of the subject-matter content. A set of selected key terms, and those that proved problematic during the first direct translation, should be back-translated by linguistic experts, who would then comment on all possible interpretations of the terms and suggest alternatives. It is mandatory to translate all the documents (namely, question-by-question specifications, the survey manual and training manuals) into the local language. Each country should submit a report on the quality of the translation work at the end of the pilot phase. For items found particularly difficult to translate, specific linguistic evaluation forms should be requested that describe the nature of the difficulty. 4. CONCLUSION It is extremely difficult to be exhaustive in enumerating the problems which can influence the precision of the results of an international study. Qualitative methods present particular risks because of the interaction between researcher and respondent, but quantitative studies are not free of problems either. The stakes in the structure of a questionnaire are sufficient to derail the results of a study.
When developing a concept test, the concept statement has to be written and rewritten for successive groups, continually taking into account what has been learned before, in order to elicit qualms, objections and concerns as well as praise, new uses and new ways of describing it. It is very important to consider cultural differences and local characteristics. Word count: 3,076 [i] Kalsbeek, D. and Lesser, V.M., "Non-sampling error considerations in environmental surveys" (Oregon State Univ., Univ. of N.C.) [ii] Hassol, A. and Rodriguez, B. (2004), "Survey Completion Rates and Resource Use at Each Step of a Dillman-Style Multi-Modal Survey" (Cambridge Univ.) [iii] Mechbal, A., Murray, C.J.L., Chatterji, S. and Üstün, T.B. (2002), "The World Health Survey (WHS) quality standards and assurance procedures", Geneva, Switzerland [iv] Üstün, T.B. and others (2001), Disability and Culture: Universalism and Diversity, Göttingen, Germany [v] Freitas, H., Jenkins, M., Moscarola, J. and Zanela, A. (1998), "A survey research design to better know the decision-makers, first results: inside & outside the USA", Bled, Slovenia (Social, Organizational and Cross-cultural Issues Surrounding EC, 8-10 June)
Focus Groups and Individual Depth Interviews 1. QUALITATIVE ANALYSIS Qualitative research is research undertaken using an unstructured approach with a small number of targeted individuals. This type of survey produces non-quantifiable insights into behaviour, attitudes and motivations. Focus group discussions and individual depth interviews are the most commonly used forms of qualitative research. 1.1 Group Discussions Focus groups are interviews with small groups of relatively homogeneous people with similar backgrounds and experience. A group usually consists of 6-10 individuals who come to a central location and sit around a table to answer questions from a trained moderator or facilitator. The moderator has an outline of questions, usually three to four pages in length, which covers all the topics he wants to discuss during the session. The interviewer introduces the subject, guides the discussion, cross-checks participants' comments against each other and encourages all members to express their opinions. Participants are asked to reflect on the questions asked, provide their own comments, listen to what the rest of the group has to say and react to those observations. The main purpose is to elicit ideas, insights and experiences in a social context where people stimulate each other and consider their own views along with the views of others. Typically, these interviews are conducted several times with different groups and in different cities so that the evaluator can identify trends in the perceptions and opinions expressed; in fact, if you run multiple groups and discover the same points in all of them, it is more likely that the conclusions you draw will be relevant to the larger market. The fact that the focus group members have similar demographic characteristics helps the researchers focus on the responses, with a higher degree of confidence that the responses from these individuals are generalizable to a larger population.
In general, focus groups are used in one of three ways: 1) as a self-contained method in studies in which they serve as the principal source of data; 2) as a supplementary source of data in studies that rely on some other primary method such as a survey; 3) in multi-method studies that combine two or more means of gathering data, in which no single primary method determines the use of the others. Focus group research is a useful strategy-building tool for harvesting information from customers, competitors' customers, suppliers and employees. Focus groups are often an excellent starting point when scanning for and uncovering opportunities for new products, branding, naming and positioning, and when generating strategic options; in fact, a well-designed focus group study can help decision makers understand the range of beliefs, opinions and buying behaviour among key segments.[i] They are helpful for better understanding a phenomenon and for forecasting the future: with this qualitative method, reactions, discussion, and supporting and contrary points can all be brought to light and added into the discussion. While focus groups are one of the most popular qualitative techniques, it is important to understand the do's and don'ts, the situations where they should be used and those where they should be avoided. It is a technique that is easy to misuse and abuse. Unlike statistically reliable public opinion and market surveys, online surveys and other quantitative techniques, these qualitative methods should not be used for market sizing, measuring consumer or B2B brand preference, brand position, customer satisfaction, or buying or product usage behaviour. They are best suited to uncovering the spectrum or range of views, beliefs, attitudes, opinions and experiences. This helps build assumptions and generate ideas which may warrant further assessment.
It is right to use focus groups: 1) To learn about the range of beliefs, attitudes and usage habits of the target segment. The goal is to hear and understand the range: if it is said once in a focus group, it is important. 2) To become acquainted with unfamiliar territory. Group research achieves fast-track knowledge about a new market, a new segment or new product categories. If you have found an interesting opportunity but know little about the market, it is an excellent use of groups. 3) To screen concepts. Concept screening is valuable at the opportunity-scan stage. The focus group setting is suitable for screening a range of concepts: product ideas, advertising, store design, web experience and brand names. 1.2 Individual depth interviews When using key informant interviews, questions are predominantly open-ended rather than closed-ended, and are best used for exploring issues or understanding the thinking behind attitudes, perceptions and behaviours. In-depths are conducted in a one-to-one (respondent-moderator) format, and thus generate highly detailed qualitative feedback. In this method, respondents are interviewed separately rather than in a group, and most often in person. There are key characteristics that differentiate an in-depth, qualitative research interview from a regular interview[ii]: 1) Open-ended questions. Questions should be worded so that respondents cannot simply answer yes or no, but must expound on the topic. 2) Semi-structured format. Although it is important to have some pre-planned questions to ask during the interview, questions should flow naturally, based on information provided by the respondent, without a specific order. In fact, the flow of the conversation dictates the questions asked and those omitted, as well as their order. 3) Seek understanding and interpretation.
It is fundamental to try to interpret what respondents say, as well as to seek clarity and a deeper understanding from the respondent throughout the interview. 4) Conversational. There should be smooth transitions from one topic to the next. 5) Recording responses. The responses are recorded, typically with audiotape and written notes. 6) Record observations. It is important to observe and record non-verbal behaviours in the field notes as they occur. 7) Record reflections. In essence, in-depth interviews involve not only asking questions, but the systematic recording and documenting of responses, coupled with intense probing for deeper meaning and understanding of the responses. Thus, in-depth interviewing often requires repeated interview sessions with the target audience under study. In-depth interviews come in two types: full in-depths and mini-depths. The primary distinction between the two is length: in-depths last 1½ hours, while mini-depths last 45 minutes to an hour. In-depths are better suited to discussions that require a highly detailed exploration of an issue, while mini-depths are better suited to less technical topics. In most situations requiring an in-depth technique, mini-depths are preferred over full in-depths for cost and efficiency reasons. In-depth interviews often take place at a focus group facility, but they can take place anywhere that respondents are. In-depths are usually audiotaped or videotaped. Written transcripts of in-depths are commonly provided (more so than for focus groups) given the richness of verbatim responses and the better ability to follow lines of questioning. It is possible to use in-depths at any point in the marketing process when it is important to explore a problem in great depth or detail, or in situations when focus groups are neither appropriate nor practical for the audience of interest.
It is helpful to use in-depth interviews when there are problems of privacy and sensitivity[iii] (for example, they are often used in areas of personal hygiene, or among sufferers of an embarrassing condition): this method provides a format in which respondents can speak openly about private or sensitive issues that could not be discussed in a group setting. Besides, in-depth interviews allow individual attention, because the moderator can attend to each respondent, probing every thought and opinion in depth. Finally, in-depth interviews can be extremely helpful if respondents are difficult to pin down (doctors, C-level executives) or live far away from one another. Unlike focus groups, in-depth interviews can be conducted by phone, so respondents can be reached anywhere and do not have to take time to travel to a facility. In general, they may be used for the same purposes as focus groups. Most often, they are used to develop a detailed picture of consumer attitudes, motivations, buying behaviours and the purchase decision-making process. Compared with focus groups, one-on-one interviews eliminate any bias that might be introduced from one respondent to another. They also allow more time for each respondent to talk, since there is no competition for airtime. And they are efficient, since it is easier to keep one individual on track than a group of people. When conducted by an interviewer who is skilled in asking open-ended questions and able to build rapport quickly, they can provide information that is difficult to obtain by any other means. One-on-one interviews are often used in lieu of focus groups where it is not practical to gather people in one place. As with focus groups, they are especially effective when things must be seen or touched, etc., in order to be evaluated. Testing products and communications materials is often done with this method.
However, one-on-one interviews are highly dependent on the skill and training of the interviewer to develop the conversation and to avoid biasing the answers. When using in-depth interviews, it is very important to design an interview guide with questions and probing follow-ups that helps the interviewer stay on track; helps ensure that important issues and topics are addressed; provides a framework and sequence for the questions; and helps maintain some consistency across interviews with different respondents. There are three basic parts of the interview guide: the face sheet, the actual questions and the post-interview comment sheet. The face sheet is used to record factual information such as the time, date and place of the interview, any special conditions or circumstances that may affect the interview, and demographic information. The actual interview questions, probing questions or statements, and anticipated follow-up questions comprise the second part of the interview guide. The final part provides a place to write notes after the interview that detail the researcher's feelings, interpretations and other comments. 2. THE IMPORTANCE OF QUALITATIVE DATA ANALYSIS Qualitative research is an important sector of the research industry, but it is not appropriate to every research problem. It is particularly useful for seeing the world through the consumer's eyes and understanding the bases for their attitudes and behaviour; in short, it provides insight. Quantitative research is concerned with describing and measuring, while qualitative research is all about explaining and understanding. The two types of research differ in a number of ways; whether to use qualitative or quantitative methods depends on the type of problem the researcher wants to solve, the approach and the techniques of analysis. Qualitative marketing studies are best suited when these situations exist[iv]: § We are in new territory and little is known.
When considering new products or new markets, qualitative research can deliver an early landscape profile of consumer or business buyer attitudes and behaviour. § Customer perceptions or attitudes may be hidden from easy view. When the product category carries unspoken meaning for buyers, qualitative market research may provide the needed tools. § Generate ideas for products, advertising or brand positioning. The nuances of buyer attitudes and beliefs can often provide stimulus for fresh new ideas and feed a formal idea-generation process. § Screening ideas and concepts. Qualitative market research can be a useful first step, prior to quantitative research, to screen new advertising, product or positioning concepts. This allows time for refining concepts before quantitative market research. 3. METHODS OF DATA ANALYSIS There are many varied techniques for collecting the data. Often the data are collected by asking little, asking only open-ended and non-value-loaded questions, and using body language. Silence, too, is a powerful tool. There is no one right way to interview; the moderator has to gain an understanding of the group. Stimulus material can form a helpful part of many group discussion projects, for example products, advertising, promotional materials, words and pictures. It is also possible to use projective techniques, for example word association, story completion, bubble cartoons and role playing. There is no single right way to analyse qualitative data. Methods of analysis depend upon the skill, experience and preferences of the researcher, the goals of the study and other factors. Analysis can be conducted either from the raw data (tape recording) or from a transcription of the interview. Neither method is better; the researcher must choose the preferred method and be prepared to change it as circumstances require. There are two approaches to the analysis of qualitative data in market research.
The large-sheet-of-paper approach is the equivalent of manual cut and paste: it involves breaking the transcripts down into text segments and allocating these under themes and headings identified deductively and/or inductively. This approach is generally considered inferior to the annotating-the-scripts approach, which involves reading the transcripts (and/or listening to the audio tapes) and writing interpretive thoughts about the data in the margins. The benefits of the latter approach are that each transcript is considered as a whole rather than as a set of discrete responses, and that it allows the analyst to re-experience the group, the body language and the tone of the discussion. Usually it is possible to analyze the data by coding (the primary purpose of coding is to organize the data in a way that assists further analysis and interpretation), by cutting and pasting data, by counting words or text segments, and by using computers to assist with the analysis. Since the 1980s qualitative researchers have been using computer programs to assist in the analysis of their data.[v] Early programs, such as The Ethnograph, provided assistance with the clerical tasks associated with analysis, including, for example, retrieval of all data with the same code. Certainly, one of the chief benefits of computer programs is their ability to alleviate the cutting, pasting and subsequent retrieval of field notes or interview transcripts. In addition to providing clerical assistance, however, there is now a wide variety of programs, such as HyperRESEARCH, NUD*IST, AQUAD and ATLAS/ti, which can be employed to help researchers in theory development and testing. Some of the more advanced software packages make it possible to graphically display links between categories or concept codes. Of course, programs cannot replace the analyst's core role, which is to understand the meaning of the text, a role which cannot be computerized because it is not a mechanical one.
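The code-and-retrieve model behind these early programs can be illustrated with a minimal sketch (the codes and transcript segments below are hypothetical; real packages layer memoing, linking and theory-building features on top of this basic idea):

```python
# Sketch: the basic "code and retrieve" model of qualitative analysis
# software. Each transcript segment is tagged with one or more
# analyst-defined codes; retrieval then pulls every segment sharing
# a code. The codes and segments here are hypothetical.

from collections import defaultdict

class CodedTranscript:
    def __init__(self):
        self._by_code = defaultdict(list)

    def code(self, segment, *codes):
        """Attach one or more codes to a text segment."""
        for c in codes:
            self._by_code[c].append(segment)

    def retrieve(self, code):
        """Return all segments tagged with the given code."""
        return list(self._by_code.get(code, []))

t = CodedTranscript()
t.code("I only buy this brand on promotion.", "price", "loyalty")
t.code("The packaging feels premium.", "packaging")
t.code("It's too expensive for everyday use.", "price")

print(t.retrieve("price"))
```

Note how retrieval fractures the transcript into discrete segments, which is exactly the limitation discussed next for focus group interaction.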
Programs can only be expected to support the analyst's own intellectual processes, and particular programs will be more appropriate for different types of qualitative research, as has been illustrated in a recent collection of user reports across a range of projects. The use of computer programs to assist in qualitative data analysis can cause the loss of information[vi]; for example, some researchers suggest that the general model of data marking and retrieval in CAQDAS is responsible for an increasing trend towards homogeneity in ethnographic research. It is true, as others have pointed out, that the early code-and-retrieve facilities have been supplemented by a whole variety of additional features such as memoing, features for defining linkages between codes, and hypertext systems. In any case, the loss of process dimensions is not confined to analysis employing computers; it is just as likely to occur in manual cut-and-paste operations. 3.1 Focus group data analysis Cut-and-paste approaches, manual or computer-based, can fail to capture or even recognize some events in the unfolding story of the focus group: when there is a sequence to a focus group discussion that can help explain the different kinds of talk at the beginning (forming and storming), the middle (performing) and the end (mourning); when participants' comments are self-contradictory, in other words when what they say at the outset of the group differs from and directly contradicts what they say later (participants are often aware of these self-contradictions and point them out themselves; others may also point them out, and even help explain them); and every time participants change their views and opinions in the course of the discussion once they have had an opportunity to hear and reflect on other opinions, through introspection and retrospection.
Such approaches can also miss the moments when participants expand later on experiences recounted earlier, adding new information, giving the experience a new and sometimes different interpretation, or simply placing it in the context of another participant's experience. Most programs have been designed in a manner that encourages the analyst to fracture the data, and it is difficult to see how an analysis of the interaction in focus groups can be undertaken when one-step on-screen coding is employed. In other words, the analyst will need to work with the complete transcript as an off-screen document in order to identify the group's dynamics. One possible approach is to trace issues and/or participants through each transcript from beginning to end. For example, we might try to identify the arguments relating to a particular issue that stimulate others to rethink their position, and those arguments that are discounted or challenged. The arguments can then be coded or labelled on a range of dimensions including, say, the strength of the response provoked, the type and range of emotions evoked, and so on. In summary, when working with focus group data, two separate coding, analysis and interpretation activities should take place: on-screen when dealing with transcript content (for example, in what ways can participants' experiences with a particular topic be categorized?), and off-screen when dealing with the interaction aspects of focus groups. 3.2 In-depth interview data analysis Data analysis begins with the field note-taking of the interviewers. As a first step, therefore, the study coordinator must ensure that all field data, including notes, comments and recordings (if any), are recovered from the interviewers. The analysis can be done by hand or by computer, depending upon the researcher's skill and the resources available. Most in-depth studies can easily be analyzed by hand, though various computer programs have been developed to assist this process.
Many different strategies have been developed for analyzing the data from a series of in-depth interviews. A simple way of approaching the analysis involves the following steps: 1) The first step is creating a written text of the interviews. This involves bringing together all of the information-gathering approaches into one written form and categorizing the interview material into various sub-topics. This is commonly described as the cut-and-paste process, and involves sorting notes and transcriptions into the broad topics or sub-topics used in the guide, and adding any new themes from the interviews. This procedure ensures that "scattered pieces of information" on the same sub-topic are put together for a complete review. 2) The second step consists in writing out each question and response (verbatim) from the interview using the recorded audiotape and notes, including side notes (observations, feelings and reflections). The side notes are differentiated from the respondent's words, typically by highlighted text or by labelling categories with appropriate headings. This important step involves determining the meaning of the information gathered in relation to the purpose of the study. If more questions are raised that need clarifying in order to serve the purpose of the study, then another in-depth interview is warranted to examine the issue more thoroughly. Verifying involves checking the credibility and validity of the information gathered. A method called triangulation is used as a means of checks and balances. Basically, one type of triangulation is to use multiple perspectives to interpret a single set of information. For example, if the researcher is studying fathers' communication with their children, he has to interview the fathers, the children, and the spouse or partner, if applicable. If each one says basically the same thing, then the weight of evidence is that the information is credible and valid.
Another simple way to triangulate would be to have a colleague read the transcripts to see if he/she came away with the same overall meaning. 3) The final step of the process is to describe and interpret the major findings and to share what you have learned from the in-depth interviews with other internal and external stakeholders. Analysis consists of considering the responses on each topic as a group and drawing interpretive conclusions about commonly held beliefs, attitudes or opinions. Implications for interventions should always be considered. It is also possible to report findings by the proportion of the various sub-groups interviewed giving their reasons under each category, the apparent strength with which certain attitudes are held, or the issues on which there is a substantial difference of opinion. Sometimes a data sheet can be used to organize the analysis. A data sheet lists the major topics and sub-topics of the interview guide in order to record responses in a logical manner. Some reporting could take the form of a formal written report, other reporting could be oral. CONCLUSION Quantitative and qualitative research can be used in the same survey: with qualitative research it is possible to understand behaviour and attitudes and gather preliminary information that helps to better define problems and suggest hypotheses, and with quantitative research it is possible to measure how widespread these attitudes and behaviours are, so as to allow statistical analysis. Often a research project needs both approaches to be complete. Word count: 3,521 Bibliography [i] Molteni, L. and Troilo, G. (2003), Marketing Research (McGraw-Hill) [ii] Guion, L., Conducting an In-depth Interview, University of Florida, original publication date October 15, 2001.
Revised January 2006. [iii] Willis, K. (2004), The International Handbook of Market Research Techniques, London (Robin J. Birn) [iv] Silverman, D. (2005), Doing Qualitative Research, London (Sage Publications Ltd) [v] Catterall, M. and Maclaran, P. (1997), "Focus Group Data and Qualitative Analysis Programs: Coding the Moving Picture as Well as the Snapshots", Sociological Research Online, vol. 2, no. 1 [vi] Bryman, A. and Burgess, R.G. (1994), "Reflections on Qualitative Data Analysis", in A. Bryman and R.G. Burgess (eds), Analyzing Qualitative Data, London (Routledge).
Simulated test marketing 1. INTRODUCTION According to the official definition of the American Marketing Association (1987), "marketing research helps to identify and define marketing opportunities and problems; to generate, refine and evaluate marketing action; to monitor marketing performance; and to improve understanding of the marketing process". Marketing research can be based on secondary data (information that already exists, having been collected for another purpose) or on primary data (information collected for the specific purpose at hand). Secondary data can be obtained more quickly and at a lower cost than primary data, but they may not be relevant or accurate, and they may not be current or impartial. Data can be collected with quantitative or qualitative surveys. 2. QUANTITATIVE SURVEYS The term quantitative research covers a range of techniques that can be used to quantify categories, evaluations, and customers' opinions and attitudes in the market. Quantitative research draws conclusions about large groups of consumers by studying a small sample of the total consumer population. The sample is the segment of the population selected to represent the population as a whole.
A well-chosen sample of less than 1 per cent of a population can give good reliability.[i] Quantitative research can be of different types: Product Development (to determine whether a new product idea fits a market need); Pricing Research (to indicate the potential premium price that the product will support); Segmentation Research (to create a map of customer attributes that the marketer can use to divide a large market into identifiable customer clusters); Customer Satisfaction (to gauge how well a company is doing in the marketplace); Advertising Research Measures (to monitor the communication impact of both your own advertising and that of a competitor); and Test Marketing (to understand the consumer reaction to new product initiatives and to forecast likely sales volume). 3. TEST MARKETING Test markets are used as a final confirmatory "go/no go" step before any large-scale product introduction occurs, such as a regional or national launch. The difference from a product test is that in a market test all elements of the marketing mix can be involved: packaging, pricing, product, promotion. In test marketing, sampling issues concern the number of stores or markets more than the number of respondents. The test can be run in different stores of the same chain in the same market, as "mini-launches" in limited geographic areas, or as an electronic test market (e.g., BehaviorScan from IRI) that provides store distribution services and media delivery at the household level. Marketers monitor both retail movement and household purchasing via scanner panels to diagnose product performance (electronic test marketing firms provide household panel data for their designated test markets). Customers who purchase the product will not perceive that it is a test, so the collected data will be reliable. The duration of a market test depends on its objectives; it can range from 2-3 months to 6-12 months.
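The claim above, that a well-chosen sample of less than 1 per cent of a population can be reliable, follows from standard sampling theory: for a simple random sample, the margin of error depends on the sample size, not on the fraction of the population sampled. A minimal sketch (the 95% confidence level and z = 1.96 are the usual textbook assumptions, not figures from this text):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Half-width of a 95% confidence interval for a proportion
    estimated from a simple random sample of size n.
    p=0.5 is the worst case (widest interval)."""
    return z * math.sqrt(p * (1 - p) / n)

# A sample of 1,000 respondents -- far less than 1% of most national
# populations -- already pins a proportion down to about +/-3 points.
print(round(margin_of_error(1000) * 100, 1))  # 3.1
```

Quadrupling the sample only halves the margin of error, which is why well-designed small samples are so cost-effective.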
This type of quantitative survey is very expensive and time-consuming; furthermore, it can give ideas to competitors (who could try to confound the test), and it does not explain the "reason why" behind the sales data. 4. SIMULATED TEST MARKETING (STM) Created in the 1960s by Yankelovich, Skelly and White Inc.,[ii] STM has supplanted traditional in-market testing as the predominant method used by packaged goods manufacturers to evaluate new products, line extensions, and other new business opportunities prior to large-scale in-market introduction. According to Joseph Willke, President of ACNielsen BASES (2003), it is an alternative to test marketing, which is slower, more expensive, and less secure. A simulated test market is a type of laboratory experiment that aims to imitate real life: respondents are selected, interviewed, and then observed making or discussing their purchases. It yields mathematical models used to forecast factors such as awareness, trial, sales volumes, and impact on other products. 4.1 METHODOLOGY Simulated test marketing technology has evolved over time through a combination of methodologies for generating market response and mathematical models that simulate the marketing environment. By the 1970s, half a dozen systems or models were being used by research firms to simulate test markets, for example ASSESSOR, LITMUS, BASES, DESIGNOR, and Simulator ESP. BASES II uses information from SAMI-Burke's huge database of new product introductions to obtain the estimates for the Awareness-Trial-Repeat progression. Key regularities are summarized in "category norms", which estimate the awareness in the target market generated by a given marketing plan; in "concept tables", which convert buying intention data and awareness data from consumers in the target market into an actual trial estimate; and in "after-use tables" (from product tests).
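The Awareness-Trial-Repeat progression behind systems such as BASES can be illustrated with a deliberately simplified first-year volume model. This is only a sketch of the general ATR logic, not the proprietary BASES model: the function name, inputs, and every number below are illustrative assumptions, standing in for the norms and tables the commercial systems use.

```python
def atr_volume_forecast(households, awareness, distribution,
                        trial_rate, repeat_rate, repeats_per_year,
                        units_per_purchase=1.0):
    """Simplified Awareness-Trial-Repeat volume model.

    triers  = households who are aware, can find the product, and try it
    volume  = their trial purchases plus the repeat purchases made by
              the fraction of triers who become repeat buyers
    """
    triers = households * awareness * distribution * trial_rate
    trial_volume = triers * units_per_purchase
    repeat_volume = triers * repeat_rate * repeats_per_year * units_per_purchase
    return trial_volume + repeat_volume

# Hypothetical plan: 10M households, 60% awareness, 70% distribution,
# 20% of aware-and-able households try, 40% of triers repeat 3x a year.
volume = atr_volume_forecast(10_000_000, 0.60, 0.70, 0.20, 0.40, 3)
print(f"{volume:,.0f} units")  # about 1,848,000 units
```

Because the forecast is a chain of multiplied factors, a modest error in any one input (awareness, say) propagates directly into the final volume, which is why the calibration tables matter so much.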
Customer data on stated buying intentions, like/dislike scores, and perceived price/value are converted into an estimate of the repeat purchase rate using these tables. BASES claims that "90% of their forecasts are within 20% of actual volume and over half are within 10%". ASSESSOR was developed by Silk and Urban at the Sloan School (MIT) and was described in detail in 1978 and earlier. ASSESSOR is a simulated test marketing procedure that compresses the time and space of a traditional test market into a laboratory situation. Customers from the target population are exposed to advertising for the new brand through the "folder" method. Respondents then "shop" at a mock store set-up, where the new brand and its competitors are available. They are given an amount of money to spend on their purchase; after a typical inter-purchase period, those who bought the new product are re-contacted and given a chance to make a repeat purchase. Thus trial and repeat, and hence long-term share, are estimated given 100% awareness and availability. This estimate is adjusted (down) for the planned awareness and availability. ASSESSOR claims that the average error is 21.5%; adjusting for actual introduction conditions, the error falls to 11.6%. Over the years, many aspects of the various models have converged. The result of this combination is a reliable and valid methodology for forecasting awareness, penetration, share, and volume for new and repositioned products and services. At a minimum, the sales projection requires two concept statements: one for a product (probably competitive) that is already on the market and whose sales history can serve as a control; the other for the test product for which a sales projection is sought. Each concept should be complete with name, positioning, packaging, features and price. Ideally there will be more than one control product, and at least one control will not be in national distribution.
This allows both the test and control products to be evaluated in regions or markets in which shoppers are not familiar with either. In seeking congruence between measures, e.g. of trial, it is important that there be no extraneous biases favouring one concept over another. Each must be a fair representation of the competition in the marketplace, and for this reason the impact of a familiar brand must be considered. If the product is going to be made available for either sampling or in-home use, it needs to reflect the competitive marketplace fairly, and its packaging needs to be congruent as well. Consumer input begins in selected stores (e.g. supermarkets), where the introduction of concepts and the interviewing occur in the category section. In this way, all candidates for the study are automatically pre-screened as category shoppers. After verification of category usage, brand share and frequency, shoppers are introduced to the concepts. The presentation may consist of finished packages on the shelf, mock-ups, a competitive display board, or individual (rotated) concept cards. Respondents then state their future purchase interest in the test product and its key competitors (controls), as well as evaluating critical attributes. This purchase interest data can be buttressed by the use of "chip allocations" to represent projected purchases. Usage of the product by the consumer is essential to incorporate concept fulfilment. In addition, data on the purchase cycle and attitudinal, behavioural and demographic information are collected to allow profiles of probable triers and rejecters, particularly if adequate amounts of test product are not available for usage in a trial (typically 200-500 packages). For the home-use phase, single samples of either the test or control product(s) are sent home with shoppers who profess that they will "definitely" or "probably" purchase, based on their evaluation of the concept.
Multiple samples, including both the test and control products, may be sent home, particularly if a third wave of interviews, the "sales wave", is to be conducted. The call-back interview for the home-use phase is conducted long enough after placement to allow, on average, half of the product to be consumed. The shoppers will have a diary for use in evaluating the product, and when contacted by telephone they will report on repeat purchase interest, purchase cycle and diagnostic information. Other data collected include the amount of product used, family/household member usage, occasions for use, anticipated purchase volume, etc. In addition to the first call-back interview, further interviews may subsequently occur in one or more waves, designated "sales wave interviews". Re-questioning as to purchase quantity and interest may occur but, more importantly, shoppers are offered the opportunity to purchase products at the regular retail price. The interviewer takes orders and fulfils each order with a letter and monetary compensation, or alternatively may ship the products. To evaluate purchase probability and predict likely market response, it is better to use a scale finer than the traditional 3-, 5-, or 7-point scales. This type of behaviour-probability scale was developed by Dr. Thomas Juster and published in different forms during the 1960s. It couples word meanings with probability estimates to encourage serious thinking. The scale has since been employed and validated by marketing research companies such as Yankelovich Partners, from the early 1960s onward, in numerous consumer behaviour studies covering packaged goods, durables, financial products and services, and other categories. An eleven-point purchase intent scale better predicts real-world behaviour, especially for mixed and high-involvement decisions. Yet even this 11-point scale overstates the actual purchasing that takes place. People don't do exactly what they say.
The relation between saying and doing is the foundation of simulated test marketing. For one thing, the research environment assumes 100% awareness and 100% distribution (in other words, everyone is aware of the product and able to find it easily), which never happens in the real world. Even taking this into consideration, the overstatement problem remains. Each of the eleven points is coupled with a verbal anchor, from "Certain will buy: 99 chances in 100" to "No chance will buy: zero chances in 100". Usually no more than 75% of the people who claim they definitely will buy actually do so. This figure declines as self-reported purchase probability declines, but the ratio is not constant. This leads to a set of adjustments for each level of self-report, which converts questionnaire ratings into estimates of likely behaviour. These adjustments, as an aside, vary with the consumer's (or industrial buyer's) level of "involvement" in a category: the higher the involvement, the more faith we can have in what people say and the lower the need for overstatement adjustment. By taking purchase probabilities and involvement into account, it is possible to produce a reasonably valid estimate of actual sales (i.e., the percentage of consumers who would buy the product at least once). 4.2 FLAWS OR WEAKNESSES There are some significant flaws or weaknesses in the current STM process. The conventional STM is strongly dependent on a marketing plan that is largely driven by mass media. For many manufacturers, mass media is not what drives their sales; rather, in-store factors such as packaging, shelf configuration, in-store promotions, and the whole range of issues subsumed under "category management" are the relevant drivers. STMs, whether BASES, LITMUS, or any of the others, were designed to model a market structure that no longer obtains. The standards and norms developed for STMs describe a bygone market.
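The overstatement adjustments described in section 4.1 (no more than roughly 75% of "definites" converting, a declining and non-constant ratio further down the scale, and more trust in stated intent for high-involvement categories) can be sketched as a weighted conversion from scale points to an estimated trial rate. Every weight below is a hypothetical illustration, not a vendor norm; real STM firms calibrate these factors from validation studies.

```python
# Hypothetical overstatement weights for an 11-point (0-10) Juster-type
# purchase-probability scale. Weights echo the pattern in the text:
# top-box conversion capped around 0.70-0.80, declining non-linearly,
# with higher weights for high-involvement categories.
ADJUSTMENTS = {
    "low_involvement":  {10: 0.70, 9: 0.55, 8: 0.40, 7: 0.25, 6: 0.15},
    "high_involvement": {10: 0.80, 9: 0.70, 8: 0.55, 7: 0.40, 6: 0.25},
}

def estimated_trial(responses, involvement="low_involvement"):
    """Convert stated purchase probabilities into an estimated trial rate.

    `responses` maps scale point (0-10) -> number of respondents.
    Points with no listed weight are assumed not to convert at all.
    """
    weights = ADJUSTMENTS[involvement]
    total = sum(responses.values())
    buyers = sum(count * weights.get(point, 0.0)
                 for point, count in responses.items())
    return buyers / total

# 200 shoppers rate the concept on the 11-point scale.
sample = {10: 30, 9: 40, 8: 50, 7: 40, 6: 20, 3: 20}
print(round(estimated_trial(sample), 3))  # 0.38
```

The same responses yield a higher estimate under the high-involvement weights, mirroring the point that involvement reduces the need for overstatement adjustment.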
In any event, those norms have always played a lesser role in the sales projection process than clients commonly believe; judgment and other factors were far more central to the projection. Rather than being an asset to exploit legitimately in the new millennium, they are components of a rear-view mirror, looking back to an earlier marketing environment. The conventional STM also requires the independent determination of many measures. These must then be assembled by addition or multiplication, with each measure adjusted by normative factors to produce a final number that, hopefully, will reflect the real world at some future time a year or two away. Unfortunately, each measure has its own error, and these errors are also added or multiplied into the final error. 4.3 FOCUS ON INTERNET USE Recently some marketers have begun to use interesting new high-tech approaches to simulated test market research, such as virtual reality and the Internet.[iii] Virtual reality could be the wave of the future for simulated test marketing. Some research companies have developed tools (Simul Shop, VisionDome) that re-create shopping situations in which researchers can test consumers' reactions to factors such as packaging, store layout, product positioning, sounds and video. These tools have several advantages: they are relatively inexpensive; they are flexible, making it possible to create a large number of simulated surroundings for several different products in different forms, sizes, colours and sounds; several people can work together via computer; and a single approach can be used to evaluate products and programmes worldwide. However, the tools also have limitations: a simulated shopping situation never quite matches real life. "Only research firms with true internet panels maintain detailed demographic (and other) information about their panel members and balance their samples so they match U.S. Census statistics.
Without this balance, there is a very high risk that survey results will not measure what they are supposed to and will lack study-to-study consistency." According to Greg McMahon, senior vice president of Synovate (2003), "the average response rate to a web survey is less than 1%, making it even less likely to get a reliable read on the potential of a new product." 4.4 THE FUTURE OF SIMULATED TEST MARKETS The development of a one-to-one marketing environment challenges all of today's research techniques, and Simulated Test Markets (STMs) are no exception. We are observing the steady erosion of the mass-marketing world and the emergence of the one-to-one world. Over the next decade, the marketing research industry as a whole (and STMs in particular) will need to develop tools that lead to a better and more detailed understanding of small consumer segments, evaluation of the individual components of an initiative (rather than the initiative as a whole), meaningful analysis of business performance over shorter time periods, and insights that are unique to individual business partners. To avoid becoming less and less reliable, STMs will need to forecast entirely at the individual level (not just trial or repeat probabilities), to allow for a different marketing plan for each individual, and to estimate different promotional and advertising elasticities for each person.[iv] STMs will need to forecast at the weekly level to assist in production planning and inventory control. STMs will need to employ much larger sample sizes to provide estimates for targets and key segments; this will allow manufacturers to tailor one-to-one marketing plans. STMs will need to optimize marketing plans, first by estimating the contribution of all marketing elements to total sales, then by examining the ROI of each marketing plan element. 5.
CONCLUSION Because of their small samples and simulated shopping environments, many marketers do not think that simulated test markets are as accurate or reliable as larger, real-world tests; according to the data, however, products that succeed in an STM have an 80% probability of succeeding in the market.[v] The question remains open: the leading provider of STM research claims that about 90% of its forecasts are within ±20%; manufacturer clients, however, assert that only 52% of STMs were confirmed by in-market results and that a whopping 41% had sales lower than predicted.[vi] Simulated test markets are widely used, often as pre-test markets. They overcome some of the disadvantages of standard and controlled test markets: they are fast and inexpensive, and they can be run to assess a new product or its marketing programme quickly. If the pre-test results are strongly positive, the product might be introduced without further testing. If the results are very poor, the product might be dropped or substantially redesigned and retested. The resulting forecast, usually delivered with a range of error of ±20 per cent, provides the manufacturer with an estimate of the size of the new business opportunity, which can then be used to evaluate its potential profitability before committing to the investment required for a full-scale market launch.[vii] Word count: 2,812 Bibliography [i] Armstrong, G., Kotler, P., Saunders, J. and Wong, V. (2002) Principles of Marketing. Prentice Hall/Financial Times. [ii] Gian Luca Marzocchi and Elisa Montaguti (2003) Le ricerche per il lancio di nuovi prodotti. Bologna. [iii] Armstrong, G., Kotler, P., Saunders, J. and Wong, V. (2002) Principles of Marketing. Prentice Hall/Financial Times. [iv] Joseph Willke, President, ACNielsen BASES (2003) "The Coming Obsolescence of Current Models and the Characteristics of Models of the Future". [v] Luca Molteni and Gabriele Troilo (2003) Marketing Research. McGraw-Hill. [vi] The Soresen In-Store Sales Forecast (1999), Appendix II, Portland. [vii] Jim Miller, Senior Vice President, ACNielsen BASES, "Global Research: Evaluating New Products Globally; Using Consumer Research to Predict Success (Part 1)".