BRFSS Technical Notes
- Data Analysis
- Population Density Groups
- Questionnaire Design
- Types of Questions
- Quality Control
During 1992-1998, the Kansas Behavioral Risk Factor Surveillance System (BRFSS) was conducted using a simple random sampling method. In this method of sampling, each telephone number in the population has an equal probability of being called. The simple random sample is created by combining the known area codes and prefixes in the surveillance area with randomly generated suffixes.
From 1999-2001 and 2003-2008, the Kansas BRFSS was conducted using disproportionate stratified sampling methodology that considers the entire state as a single geographical stratum. This method of probability sampling involved assigning sets of one hundred telephone numbers with the same area code, prefix, and first two digits of the suffix and all possible combinations of the last two digits ("hundred blocks") into two strata. Those hundred blocks that have at least one known household number are designated high density (also called "one-plus blocks"); hundred blocks with no known household numbers are designated low density ("zero blocks"). The high density stratum is sampled at a rate 1.5 times higher than the low density stratum, resulting in greater efficiency.
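The hundred-block stratification and the 1.5-times sampling rate described above can be sketched as follows. This is an illustrative sketch only: the helper names, the ten-digit number format, and the base sampling rate are hypothetical, not part of the BRFSS specification.

```python
import random

def hundred_block(number: str) -> str:
    """A hundred block is identified by the area code, prefix, and
    first two suffix digits (the first 8 of 10 digits)."""
    return number[:8]

def stratify_blocks(blocks, known_household_numbers):
    """Split hundred blocks into a high density stratum ("one-plus blocks",
    at least one known household number) and a low density stratum
    ("zero blocks", no known household numbers)."""
    one_plus = {hundred_block(n) for n in known_household_numbers}
    high = [b for b in blocks if b in one_plus]
    low = [b for b in blocks if b not in one_plus]
    return high, low

def draw_block_sample(high, low, base_rate, rng):
    """Sample the high density stratum at 1.5 times the rate of the
    low density stratum (base_rate is a hypothetical sampling fraction)."""
    n_high = min(len(high), round(len(high) * base_rate * 1.5))
    n_low = min(len(low), round(len(low) * base_rate))
    return rng.sample(high, n_high) + rng.sample(low, n_low)
```

In an actual random-digit-dialing sample, telephone numbers would then be generated within the selected blocks by appending random last-two-digit suffixes.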
In 2002, the sampling method was slightly modified. The survey was conducted using disproportionate stratified sampling methodology that considers the entire state as a single geographical stratum, as in the earlier years, but the probability sampling for assigning sets of telephone numbers consisted of three strata: listed one-plus block numbers, not-listed one-plus block numbers, and zero block numbers. Not-listed one-plus numbers are sampled at two-thirds the rate of listed numbers; zero block numbers are sampled at one-fifth the rate of listed numbers. The sampling was changed to increase survey efficiency.
Beginning in 2009, the sampling method was modified by implementing disproportionate stratified sampling that selected landline telephone numbers within 10 geographic strata comprising county groupings, instead of randomly selecting telephone numbers from the entire state as a single geographic stratum. These 10 geographical strata are: Johnson county; Sedgwick county; Shawnee county; Wyandotte county; Northwest public health district; Southwest public health district; North Central public health district; South Central public health district excluding Sedgwick county; Northeast public health district excluding Johnson, Shawnee and Wyandotte counties; and Southeast public health district. The sample drawn from each geographical stratum is based on the population size within that stratum, the confidence level, and the margin of error. This methodology is commonly used to target collection for geographically identifiable subpopulations, for example people in rural areas. It also increases the accuracy of prevalence estimates for small subpopulations. This modification of the sampling methodology for the 2009 and subsequent Kansas BRFSS surveys was made to address the need to collect an adequate sample to provide local or county level data. These data are needed to determine priority health issues, to identify population subgroups at higher risk of illness, and to monitor the health status of local communities. This goal can be achieved by providing BRFSS data for individual counties (counties with larger population sizes) and for bioterrorism regions. As in previous years, this method of probability sampling involved assigning sets of one hundred telephone numbers with the same area code, prefix, and first two digits of the suffix and all possible combinations of the last two digits ("hundred blocks") into two strata.
Those hundred blocks that have at least one known household number are designated high density (also called "one-plus blocks"); hundred blocks with no known household numbers are designated low density ("zero blocks"). The high density stratum is sampled at a rate 1.5 times higher than the low density stratum, resulting in greater efficiency.
Approximately the same number of persons is called each month throughout each calendar year to reduce bias caused by seasonal variation of health risk behaviors. Potential working telephone numbers are dialed during three separate calling periods (daytime, evening, and weekends) for a total of 15 call attempts before being replaced. Upon reaching a valid household number, one household member ages 18 years or older is randomly selected. If the selected respondent is not available, an appointment is made to call at a later time or date. Because respondents are selected at random and no identifying information is solicited, all responses to this survey are anonymous.
In 2010, the landline telephone survey used survey methodology identical to that of the 2009 survey.
Changes in Kansas BRFSS Survey Sampling Methodology: Beginning in 2011, a major change was made in the sampling methodology of the Kansas BRFSS to comply with the guidelines provided by the CDC for the 2011 survey. From 2011 onwards, a dual frame sampling methodology (landline telephone sample and cellular telephone sample) is used for the Kansas BRFSS instead of the single frame methodology (landline telephone sample only).
In 2011, the CDC advised all states and territories to implement a dual frame sampling methodology for the BRFSS survey and to include both adults 18 years and older living in private residences with landline telephone service and adults 18 years and older living in private residences with cellular telephone only service. The states were advised to target at least 20 percent of their total sample of complete interviews to come from cellular telephone only service households. This change in BRFSS sampling methodology was made to address the impact of the growing number of households with cellular telephone only service and differences in the demographic profile of the people who live in those households, and to maintain the representativeness, coverage, and validity of BRFSS data. To comply with the current guidelines regarding BRFSS sampling methodology, the Kansas BRFSS program implemented dual frame sampling methodology for the 2011 Kansas BRFSS survey.
The dual frame sampling methodology for the 2011 survey included two components: 1) a landline telephone service survey component; and 2) a cellular telephone only service component. The landline telephone survey component of this dual frame sampling method remained identical to the sampling method for the 2009 and 2010 surveys. It comprised disproportionate stratified sampling that selected landline telephone numbers within 10 geographic strata comprising county groupings, instead of randomly selecting telephone numbers from the entire state as a single geographic stratum. These 10 geographical strata are: Johnson county; Sedgwick county; Shawnee county; Wyandotte county; Northwest public health district; Southwest public health district; North Central public health district; South Central public health district excluding Sedgwick county; Northeast public health district excluding Johnson, Shawnee and Wyandotte counties; and Southeast public health district. The sample drawn from each geographical stratum is based on the population size within that stratum, the confidence level, and the margin of error. The landline telephone component sampling was designed to reach non-institutionalized adults ages 18 years and older living in private residences in Kansas. As in previous years, this method of probability sampling involved assigning sets of one hundred telephone numbers with the same area code, prefix, and first two digits of the suffix and all possible combinations of the last two digits ("hundred blocks") into two strata. Those hundred blocks that have at least one known household number are designated high density (also called "one-plus blocks"); hundred blocks with no known household numbers are designated low density ("zero blocks"). The high density stratum is sampled at a rate 1.5 times higher than the low density stratum, resulting in greater efficiency.
The cellular telephone survey component of this dual frame sampling method used a sampling frame comprising all 1000-series blocks dedicated to cellular devices serving the state with a nonzero chance of inclusion. The cellular telephone survey component sampling was designed to reach non-institutionalized adults ages 18 years and older living in private residences with cellular telephone only service in Kansas.
In 2012, the CDC further advised all states to make two additional changes to the BRFSS methodology. First, respondents living in households with both cell phone and landline service but receiving 90 percent or more of their calls on cell phones ("cellular telephone mostly" households) were included in the cell phone survey sample. This addressed the impact of increased cell phone use in households with dual telephone service, in addition to the growing number of cellular telephone only households and the differences in the demographic profile of the people who live in them, to further maintain the representativeness, coverage, and validity of BRFSS data. Second, residents living in college housing with landline and/or cellular telephone service were included in both the landline and cell phone samples. These changes allow inclusion of respondents from cellular telephone mostly households, as well as respondents living in college housing, thus making the survey sample more representative of the general population.
Also, for the 2012 landline telephone sample, landline service over the internet was counted as landline service. This included Vonage, Magic Jack, and other home-based phone services. Apart from these changes, the sampling methodology for the landline and cellular phone components of the 2012 survey was the same as for the 2011 survey.
From 2000-2003, the Kansas BRFSS survey sample size was about 4,000 respondents; from 2004-2008, it was about 8,000 respondents.
The target sample size in odd numbered years beginning in 2009 is 16,000 complete interviews; in even numbered years it remains 8,000. The target sample for the 2009 survey was 16,000 complete interviews, and for the 2010 survey it was 8,000 complete interviews.
For 2011 Kansas BRFSS survey, the target total (combined landline and cell phone sample) sample size was about 19,200 respondents with a target of 16,000 respondents for the landline telephone survey component and 3,200 respondents for the cellular telephone survey component.
For 2012 Kansas BRFSS survey, the target total (combined landline and cell phone sample) sample size was about 10,000 respondents with a target of 8,000 respondents for the landline telephone survey component and 2,000 respondents for the cellular telephone survey component (20% of the state's total combined landline and cell phone sample).
Data weighting is an important statistical process that attempts to remove bias in the sample. It corrects for differences in the probability of selection due to non-response and non-coverage errors. It adjusts variables of age and gender between the sample and the entire population. Data weighting also allows the generalization of findings to the whole population, not just those who respond to the survey.
Once BRFSS data are collected, statistical procedures are undertaken to make sure the estimates of health indicators generated by the analysis of survey data are representative of the population for each state and/or local area.
The weighting process for BRFSS data includes calculation of a design weight as one of its components. In the BRFSS survey, the design factors that affect weighting are: the number of residential telephones in the household, the number of adults in the household, and geographic or density stratification. The formula for calculating the design weight is:
Design Weight = STRWT * (1 / IMPNPH) * NUMADULT
The weighting process for BRFSS also involves adjusting the distribution of the sample data so that it more accurately reflects the total population of the sampled area. The method used for this adjustment through 2010 is called post-stratification. It involves calculating a post-stratification factor: the ratio of the age, race, and sex distribution of the state population to that of the sample.
This post-stratification factor is then multiplied by the design weight to compute an adjusted, final weight variable. Thus the weighting process adjusts not only for variation in selection and sampling probability but also for demographic characteristics, so that projections can be made from the sample to the general population. The computational formula below reflects all the possible factors that could be taken into account in weighting a state's data through 2010. If a factor does not apply, its value is set to one.
The formula for weighting using post-stratification method:
FINALWT = Design Weight * POSTSTR
FINALWT = STRWT * (1 / IMPNPH) * NUMADULT * POSTSTR
The final weight variable is used in the analysis of survey data to generate estimates of health indicators for the general population.
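The pre-2011 weight computation can be expressed as a short sketch. The variable names follow the formulas above; the numeric values in the example are illustrative only.

```python
def design_weight(strwt: float, impnph: int, numadult: int) -> float:
    """Design weight = STRWT * (1 / IMPNPH) * NUMADULT, where IMPNPH is the
    number of residential telephone lines in the household and NUMADULT is
    the number of adults (18+) in the household."""
    return strwt * (1.0 / impnph) * numadult

def final_weight(strwt: float, impnph: int, numadult: int, poststr: float) -> float:
    """FINALWT = STRWT * (1 / IMPNPH) * NUMADULT * POSTSTR.
    A factor that does not apply is set to one."""
    return design_weight(strwt, impnph, numadult) * poststr
```

For example, a respondent in a two-adult, one-line household with a stratum weight of 2.0 and a post-stratification factor of 1.25 would receive a final weight of 2.0 * (1/1) * 2 * 1.25 = 5.0.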
Additional facts about data weighting are:
- Weighting consists of a lot more than post-stratification.
- Weighting for design factors has more of an effect on final results than does post-stratification.
- Weighting for design factors is also more important conceptually.
- Weighting affects both the point estimate (bias) and confidence intervals (precision).
Application of the weighting process allows comparability of data. However, weighting can only be performed when the sampling methodology is carefully controlled.
New Weighting Methodology: Iterative Proportional Fitting or Raking
As mentioned above, since the 1980s the CDC has used the post-stratification statistical method to weight BRFSS survey data, simultaneously adjusting survey respondent data to known proportions of age, race and ethnicity, gender, geographic region, or other known characteristics of a population. This type of weighting is important because it makes the sample more representative of the population and adjusts for non-response bias. In 2006, in accordance with the recommendations of an expert panel of survey methodologists, the CDC began testing a more sophisticated weighting method called "iterative proportional fitting", or "raking".
The raking method adjusts the data so that groups which are underrepresented in the sample are accurately represented in the final dataset. Raking allows for the incorporation of cell phone survey data, permits the introduction of additional demographic characteristics, and more accurately matches sample distributions to known demographic characteristics of populations. The use of raking reduces non-response bias and has been shown to reduce error within estimates.
Raking has several advantages over post-stratification. First, it allows the introduction of more demographic variables into the statistical weighting process than would have been possible with post-stratification, such as the education level, marital status, and home ownership variables suggested by the BRFSS expert panel. This further reduces the potential for bias and increases the representativeness of estimates. Second, raking allows for the incorporation of a now-crucial variable, telephone source (landline or cellular telephone), into the BRFSS weighting methodology.
Beginning with the 2011 dataset, the CDC has adopted the raking method in place of the post-stratification weighting procedure as the sole BRFSS statistical weighting method.
The new BRFSS weighting methodology comprises two components:
- Design Weight
- Raking Adjustment
Design Weight: The design weight is calculated using the computational formula:
Design Weight = _STRWT * (1/NUMPHON2) * NUMADULT
- The stratum weight (_STRWT) is calculated using:
- Number of available records (NRECSTR) and the number of records selected (NRECSEL) within each geographic stratum (_GEOSTR) and density stratum (_DENSTR);
- Geographic strata (entire state, counties, census tracts, etc.); and
- Density strata (1=listed numbers, 2=not listed numbers).
- Within each _GEOSTR * _DENSTR combination, the stratum weight (_STRWT) is calculated from the average of the NRECSTR and the sum of all sample records used to produce the NRECSEL.
The computational formula for stratum weight:
_STRWT = NRECSTR / NRECSEL
- 1/ NUMPHON2 is the inverse of the number of residential telephone numbers in the respondent's household.
- NUMADULT is the number of adults 18 years and older in the respondent's household.
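Under the definitions above, the stratum weight and the post-2011 design weight can be sketched as follows. This is illustrative only; in actual BRFSS processing the stratum weight is computed per _GEOSTR by _DENSTR cell, and the example numbers are hypothetical.

```python
def stratum_weight(nrecstr: int, nrecsel: int) -> float:
    """_STRWT = NRECSTR / NRECSEL within one _GEOSTR x _DENSTR combination:
    the number of available records divided by the number selected."""
    return nrecstr / nrecsel

def design_weight(nrecstr: int, nrecsel: int,
                  numphon2: int, numadult: int) -> float:
    """Design Weight = _STRWT * (1 / NUMPHON2) * NUMADULT, where NUMPHON2 is
    the number of residential telephone numbers in the household and
    NUMADULT is the number of adults (18+) in the household."""
    return stratum_weight(nrecstr, nrecsel) * (1.0 / numphon2) * numadult
```

For instance, a stratum with 1,000 available records from which 100 were selected gives _STRWT = 10; a respondent there with two phone lines and three adults in the household gets a design weight of 10 * (1/2) * 3 = 15.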
The final weight is calculated for the analysis of survey data to generate estimates of health indicators that are representative of the general population.
The computational formula for Final weight:
Final Weight = Design Weight * Raking Adjustment
Raking adjustment: Raking adjusts estimates within each state by using:
- Telephone source,
- Detailed race and ethnicity,
- Regions within state,
- Education level,
- Marital status,
- Age group by gender,
- Gender by race and ethnicity,
- Age group by race and ethnicity, and
- Renter/homeowner status.
Raking is completed by adjusting for one demographic variable (or dimension) at a time. For example, when weighting by age and gender, weights would first be adjusted for gender groups, then those estimates would be adjusted by age groups. This procedure would continue in an iterative process until all group proportions in the sample approach those of the population, or after 75 iterations.
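The one-dimension-at-a-time adjustment can be illustrated with a minimal raking loop. This is a generic sketch of iterative proportional fitting, not the CDC's production implementation; the convergence tolerance and data layout are assumptions.

```python
def rake(weights, groups, targets, dims, max_iter=75, tol=1e-6):
    """Iterative proportional fitting (raking).
    weights: design weights, one per respondent (adjusted in place)
    groups:  one dict per respondent mapping dimension name -> category
    targets: dimension -> {category: target population proportion}
    dims:    ordered list of dimension names to cycle through
    Stops when all scaling factors are near 1, or after max_iter passes."""
    for _ in range(max_iter):
        max_change = 0.0
        for dim in dims:
            total = sum(weights)
            # current weighted total in each category of this dimension
            current = {}
            for w, g in zip(weights, groups):
                current[g[dim]] = current.get(g[dim], 0.0) + w
            # scale every respondent in a category by the same factor so the
            # category's weighted share matches its target proportion
            for i, g in enumerate(groups):
                factor = (targets[dim][g[dim]] * total) / current[g[dim]]
                weights[i] *= factor
                max_change = max(max_change, abs(factor - 1.0))
        if max_change < tol:
            break
    return weights
```

With a single dimension this reduces to post-stratification; with several dimensions, each pass nudges the weights toward all of the marginal targets in turn until the group proportions stabilize.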
Weighted data analysis techniques are used to analyze BRFSS survey data and generate population-based estimates of health indicators. The final weight variable is used in these analyses.
Weight Trimming in Raking
Weight trimming is used to increase the value of extremely low weights and decrease the value of extremely high weights. The objective of weight trimming is to reduce errors in the outcome estimates caused by unusually high or low weights in some categories.
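A generic form of weight trimming caps extreme weights at chosen bounds and then rescales so the total weight is preserved. The bounds here are hypothetical; the CDC's actual trimming rules are more involved than this sketch.

```python
def trim_weights(weights, low, high):
    """Raise weights below `low` to `low` and cap weights above `high` at
    `high`, then rescale uniformly so the total weight is unchanged."""
    trimmed = [min(max(w, low), high) for w in weights]
    scale = sum(weights) / sum(trimmed)
    return [w * scale for w in trimmed]
```

Trimming trades a small amount of bias for a reduction in the variance contributed by a few extreme weights.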
Source: The above description of the new weighting methodology was provided to state BRFSS programs through the fact sheets titled Behavioral Risk Factor Surveillance System (BRFSS) Fact Sheet: Raking (PDF) and Behavioral Risk Factor Surveillance System Improving Survey Methodology, prepared by the Public Health Surveillance Program Office and Division of Behavioral Surveillance, Office of Surveillance, Epidemiology and Laboratory Services, Centers for Disease Control and Prevention.
The new survey methodology, including dual frame sampling and the iterative proportional fitting (IPF) or raking method, was used starting with the 2011 data. Therefore, DO NOT COMPARE 2011 to present data with previous years.
Telephone interviewing has been demonstrated to be a reliable method for collecting behavioral risk data and can cost three to four times less than other interviewing methods such as mail-in interviews or face-to-face interviews. The BRFSS methodology has been utilized and evaluated by the CDC and other participating states since 1984. Content of survey questions, questionnaire design, data collection procedures, surveying techniques, and editing procedures have been thoroughly evaluated to maintain overall data quality and to lessen the potential for bias within the population sample.
The weighted data analysis is conducted to estimate the overall prevalence of risk factors, diseases, and behaviors among adults 18 years and older in Kansas. On some questions pertaining to a particular topic, only respondents who responded in a specific way [subpopulation] on an initial question continue to the next question. Though the subsequent question is asked only of those respondents who responded in a particular manner on the initial question, the analysis for the subsequent question is based on a denominator that includes all respondents who responded to the initial question (in any manner). Therefore, the presented results are based on all respondents rather than the subpopulation. Questions for which this approach is applied are indicated with the statement "Denominator adjusted to represent the prevalence in the overall population". In addition to overall prevalence estimates, stratified analyses are also conducted to examine the burden of a public health issue within different population subgroups based on socio-demographic factors, risk behaviors, and co-morbid conditions. Data analysis is also conducted using population density groups. The definitions and designations of these groups are described in the Population Density Groups tab.
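The denominator adjustment can be sketched as follows: the weighted numerator counts only affirmative answers to the follow-up question, while the weighted denominator includes every respondent to the initial question. The function and variable names are illustrative, not from the BRFSS codebase.

```python
def adjusted_prevalence(weights, asked, positive):
    """Weighted prevalence with the denominator adjusted to represent the
    overall population.
    weights:  final weights for all respondents to the initial question
    asked:    True if the respondent was routed to the follow-up question
    positive: True if the respondent answered the follow-up affirmatively"""
    numerator = sum(w for w, a, p in zip(weights, asked, positive) if a and p)
    denominator = sum(weights)  # ALL respondents to the initial question
    return numerator / denominator
```

This is why such estimates read as prevalence in the overall population rather than within the routed subpopulation.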
Because 2010 Census data became available in 2011, 2010 Census estimates were used to classify the 105 counties into 5 population density peer group categories for the data analysis of the 2011 BRFSS survey, as shown in the following table. This classification will be used for these analyses in the BRFSS surveys conducted in subsequent years.
Definition of designations (the original table reported, for each group, the number of counties, land area in square miles, and population density in persons per square mile):
- Less than 6 persons per square mile
- 6 to less than 20 persons per square mile
- 20 to less than 40 persons per square mile
- 40 to less than 150 persons per square mile
- 150 or more persons per square mile
The weighted data analysis techniques applied to the 2011 survey data are the same as in previous years. Adoption of the new survey methodology for 2011 and subsequent years does not affect the analytical approach used to generate estimates of the health indicators.
The BRFSS survey conducted by all states consists of a core section and an optional modules/state-added questions section. The core section of the survey is consistent across all states, as it includes questions prescribed by the CDC. The optional modules are selected by the states from a bank of CDC-supported modules, or each state designs its own modules (state-added modules). Kansas BRFSS uses a split questionnaire design. It consists of the core section, which is asked of all respondents, after which the survey splits into two “branches” of optional modules/state-added modules. Once respondents have been asked the core questions, they are asked the questions in either questionnaire A (also called Part A) or questionnaire B (also called Part B) of the survey. Respondents are randomly assigned to one of these two arms of the survey: approximately half of the respondents receive questionnaire A, and the remainder receive questionnaire B.
Advantages of a split questionnaire:
- Collect data on numerous topics within one data year
- Collect in-depth data on one specific topic
- Ability to keep questionnaire time and length to a minimum
Disadvantages of a split questionnaire:
- Complexity of data weighting; additional weighting factors are needed
- Variables on questionnaire A cannot be analyzed with variables on questionnaire B
Analysis of split questionnaire:
The sample size for each split of the questionnaire is approximately half of the total sample size. As mentioned above, each respondent is randomly assigned to questionnaire A or questionnaire B. Questions regarding certain conditions are included in the core section (e.g., asthma, disability, high blood pressure, etc.). State-added questions and optional modules for these conditions are included on questionnaire A or questionnaire B. Therefore, these additional questions on a specific health condition are asked only of respondents assigned to that particular split questionnaire. As a result, approximately half of the respondents identified with a particular condition in the core section respond to the additional questions on that condition.
Also, the number of adults with the specific health condition may vary on each question due to respondents terminating at various points in the survey. A split questionnaire was used for the following surveys: 2004, 2005, 2006, 2007, 2008, 2009, 2010, 2011 and 2012.
The BRFSS questionnaire is designed by the Centers for Disease Control and Prevention, state BRFSS Coordinators, and each individual state’s survey selection committee. The questionnaire has three components: core questions, optional modules, and state added questions.
- Core questions are asked by all states and include approximately 72 questions (though this may vary somewhat from year to year). The order in which the questions appear and the wording of the questions are exactly the same in all states. Types of core questions include fixed, rotating, and emerging health issues.
- Fixed core contains questions that are asked every year. Fixed core topics include health status, health care access, healthy days, life satisfaction, emotional support, disability, tobacco use, alcohol use, exercise, immunization, HIV/AIDS, diabetes, asthma, and cardiovascular disease. The total number of fixed core questions is 52.
- Rotating core contains questions asked every other year.
- Odd years (2005, 2007, 2009, etc): fruits and vegetables, hypertension awareness, cholesterol awareness, arthritis burden, and physical activity. Total number of rotating core questions for odd years is 72.
- Even years (2006, 2008, 2010, etc): women's health, prostate screening, colorectal cancer screening, oral health and injury. Total number of rotating core questions for even years is 74 for female respondents, and 72 for male respondents.
- Emerging Health Issues: contains late breaking health issue questions. At the end of the survey year, these questions are evaluated to determine if they should be a part of the fixed core. Total number of questions for emerging health issues is four.
- Optional Modules include questions on a specific health topic. The CDC provides a pool of questions from which states may select. States have the option of adding these questions to their survey. The CDC's responsibilities regarding these questions include development of questions, cognitive testing, financial support to states to include these questions on their questionnaire, data management, limited analysis and quality control.
- State-added questions are based on the public health needs of each state. They include questions not available as supported optional modules in that year or emerging health issues specific to the state. Any modification made to a CDC-supported module available in that year makes the module a state-added module. The CDC has no responsibilities regarding these questions.
The BRFSS survey sampling methods are discussed in the methodology section. Sampling yields results which are an estimate of the true answer for the entire population. The higher the number of persons interviewed, the greater the precision of the estimate. When the data are subdivided to look at sub-populations (e.g., an age subgroup) these estimates will be less precise; if the number of persons interviewed was small because the subgroup represents a small fraction of the population (e.g., diabetics less than 30 years old), the estimate may become too uncertain to be of value.
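The relationship between the number of interviews and the precision of an estimate can be illustrated with the usual approximation for the margin of error of a proportion. Note this simple-random-sample formula ignores the survey's design effect and weighting, so it understates actual BRFSS confidence intervals somewhat.

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for an estimated proportion p based
    on n interviews, assuming simple random sampling (no design effect)."""
    return z * math.sqrt(p * (1 - p) / n)
```

For a prevalence estimate of 50% based on 400 interviews, the margin of error is about +/- 4.9 percentage points; quadrupling the sample to 1,600 interviews halves it to about +/- 2.5 points, which is why small subgroup estimates can become too uncertain to be of value.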
Because the survey is conducted by telephone, persons without telephones could not be reached. Since phone ownership is highly correlated to income, persons without a phone are more likely to have low incomes than persons with a telephone. This will potentially affect questions with responses that are highly dependent on income (e.g., health insurance) more than other questions. However, because phone ownership is high in Kansas (greater than 95%), it is unlikely that failing to reach these persons will substantially alter results.
From 2011 onwards, inclusion of cellular telephone only service (and cellular telephone mostly service) households in addition to landline telephone service households will further assist in maintaining the representativeness of the survey sample to the general population.
How a question is written and which questions preceded it in the questionnaire can influence responses in unpredictable ways. Not all the questions used in the survey have been tested to ensure that all persons understand the intended meaning. Those that come from modules created by the Centers for Disease Control and Prevention usually have been tested, while those in state modules may or may not have been tested, depending on the source of the question. Furthermore, not all questions are equally easy for respondents to answer. While it may be easy for a respondent to provide a personal opinion, it may be much harder to recall a past event (last mammogram) or provide factual information (household income).
Interviewers are trained and monitored to ensure that they administer the survey in a neutral voice and read the written question verbatim and without comment. Nonetheless, it is possible for the interviewer to bias the results through tone of voice or administration technique. Coding errors may also occur if the interviewer types in the wrong response to the question. In addition, the person being interviewed may alter his or her response to give the interviewer the most socially acceptable answer. This may be a problem especially for questions which may have a perceived stigma (e.g., HIV risk).
The bias from non-response cannot be removed and it is not possible to know if those who refused to respond would have answered the questions in approximately the same ways as those who responded.
Confounding and Causation
Personal characteristics which are presented on this web site are univariate (i.e., examine each risk factor in relationship to only one characteristic at a time); however, the complexity of health associations are not fully represented by examining single relationships. For example, an examination of heart disease and employment status might show a greater prevalence of heart disease among persons who are retired than among persons who are employed. However, persons who are retired are expected to have a greater average age than persons who are employed; consequently, this relationship might entirely disappear if we removed the effects of age. (If this were the case we would say that the relationship between heart disease and employment status was being confounded by age.)
Likewise, this web site does not attempt to explain the causes of the health effects examined. For instance, BRFSS data might show a higher prevalence of heart disease among smokers, but one should not conclude from this that smoking causes heart disease. That smoking is indeed a causal factor for heart disease is apparent from a large body of scientific data, but that is not a conclusion that can be drawn from a cross-sectional survey such as this. Rather this is a "snapshot" of disease, risk factors, and population characteristics for adult residents of Kansas at a point in time.
Quality control issues are very important to the Kansas Behavioral Risk Factor Surveillance System (BRFSS) program. The data we collect must be as accurate as possible to ensure that it represents the self-reported health and beliefs of the people of Kansas. To provide consistent and timely quality control, a number of tasks are performed before and during the data collection process, and after data collection is complete. First, prior to data collection, interviewers are extensively trained. Newly hired interviewers participate in a two-day, eight-hour online training developed by the Centers for Disease Control and Prevention (CDC) and located on CDC's website at www.cdc.gov/brfss/training.htm. This BRFSS Interviewer Training is interactive and includes computer testing after each section is completed.
Prior to implementation of each new survey, the Survey Supervisor trains the interviewers on the actual content of the survey. Each survey is reviewed question by question to ensure that there is understanding of the questions being asked, the possible responses to each question, and proper pronunciation. This is followed by a debriefing session to address any questions or concerns that the interviewers may have before they begin interviewing.
Each new interviewer is then assigned to an experienced interviewer for further training. The trainee interviewers sit with the experienced interviewers as they make phone calls, so the trainees can experience what a typical evening of interviewing is like and observe the complications that may arise once they are interviewing on their own. This peer training allows the trainees to see and hear "live" interviews and to ask specific questions that may be unique to each individual interview.
Additionally, each survey has a test study where the trainees can practice the survey without actually being on the telephone. Interviewers usually practice on a test study for several days before they are ready to call respondents and conduct a "real" interview. This allows them to see the questions on the computer monitor and to become acquainted with the different skip patterns and response categories that are often unique to each individual survey. During this time, new interviewers are encouraged to use each other as mock respondents so they can become familiar with using the telephone and the computer at the same time. Finally, the trainee interviewers meet with their trainer to discuss any questions or concerns they may have before going "live". Once the trainer and the trainee interviewer agree that the trainee is ready, he/she is assigned a station and begins calling on the actual study.
Second, the Survey Supervisor performs interviewer monitoring monthly. The monitoring consists of the Survey Supervisor listening in on each interviewer as they conduct a survey. The Survey Supervisor completes a monitoring form based on the BRFSS User's Guide recommendations. This form is then shared with the interviewer as a performance review, giving the interviewer immediate feedback regarding attitude, interviewing techniques, and compliance with BRFSS protocol. Any concerns, questions, or retraining issues are discussed with the interviewer. After the form is discussed, both the Survey Supervisor and the interviewer sign it, showing that it has been reviewed and any issues have been discussed. On average, each interviewer is monitored at least once per month. New interviewers are monitored more frequently to ensure that they are following CDC's BRFSS protocol. The monitoring forms are kept confidential and are tracked on a monthly spreadsheet.
Third, the data are collected in-house rather than contracted out to a research firm. This allows Kansas BRFSS staff to perform frequent reviews of the data to ensure that they are being collected and stored correctly. If errors are detected at any time during the month, the data can be reviewed and corrections made quickly, without major interruptions to the data collection process or extensive corruption of the data. In-house data collection allows for a faster and more timely response to protocol infractions and data errors.
Fourth, the data are edited on a monthly basis. The CDC has developed a data-editing program used specifically for the BRFSS. Each month the data are forwarded to the CDC for review. This allows us to identify and correct any data errors and/or outliers in a timely manner.
The processes we have implemented and the CDC's BRFSS protocol are important to ensuring quality control. Training, monitoring, in-house data collection, and data editing allow us to ensure that the sample we collect is of the best quality possible. This is an ongoing task that helps to ensure that the BRFSS data are highly representative of the health behaviors and opinions of the people of Kansas.