Troy McAlister (email@example.com) is an ethics and compliance professional with international experience designing, implementing, monitoring, and assessing compliance programs, based in The Woodlands, Texas, USA.
Compliance professionals are constantly looking for ways to accurately assess the effectiveness of their program’s individual elements. Some elements, such as third-party vetting or gifts and entertainment, leave distinct audit trails that can be tested and audited. With other areas, such as strategic planning or internal investigations, you may be so heavily involved that you already know where the successes live and the skeletons are buried. Then there are concepts such as culture, willingness to report concerns, or simply knowing whom to talk to and where to find company resources, for which you have to gather feedback from across your employee base to really determine where you stand.
Budget belt-tightening and heavy restrictions on travel in the age of COVID-19 mean gathering feedback via face-to-face discussions may not be a viable option. Further, don’t expect to get honest and forthcoming feedback during a choppy multiperson Zoom meeting. Here is where a simple tool like an employee survey can be one of your biggest sources of information while consuming only a minimal amount of time and resources.
Drafting your survey
Writing a survey may seem simple and straightforward; however, there is something of an art to drafting survey questions that yield the most valuable results. Not every compliance department has the budget for, or access to, third-party specialists who write surveys for a living, and in the age of doing more with less, compliance professionals are forced to dip their toes into the unfamiliar and uncomfortable. So, for those who find themselves venturing out of their comfort zones—or getting unexpected, unusual, or unproductive results—I humbly submit some tips to consider during this process. (Survey specialists and occupational psychologists, you can turn away now!)
Surveys can be a recurring or an ad hoc exercise. For the purposes of this discussion, I am focused on an annual survey whose results you can incorporate into your annual compliance program assessment and use as an input for your plans in the coming year (though many of these tips apply to ad hoc surveys as well). This strengthens your argument that you are continuously monitoring and evolving your program, as emphasized in the latest Department of Justice and Securities and Exchange Commission guidance. Once distributed, a survey can take 4–8 weeks to receive a sufficient number of responses (more on this later), plus several more days or weeks to analyze the results. So, plan accordingly based on when you perform those processes.
Topics to cover
Good recurring topics include:
Tone and ethical behavior of management. If you’re in a matrixed or decentralized organization, you may want to distinguish between executive management and business unit/location management. The further your employees are from top executives, the more likely it is that local management sets the tone—whether they realize it or not.
Awareness of, ability to find, and expectation to follow the code of conduct or company policies and procedures.
Pressure to violate policies and procedures or laws and regulations in order to meet financial targets.
Effectiveness of training programs, including time commitments, relevance of available topics, need for missing or desired topics, or the ability to find the right contacts if they have questions and need guidance.
Promotion of and willingness to use the company’s hotline system. This should include willingness to report concerns and fear of retaliation.
It’s always a good idea to include open-ended text boxes at the end of each section or at the end of the survey to allow participants to expand on their responses or raise an issue the survey has not covered.
Know your audience
You’ll want a good cross section of your company, from upper management down to line employees and across all functions. I’ve always found the most meaningful results come from middle management. Perhaps more important for evaluating your results, you will want to capture participant information in data fields that give you ways of slicing your data more meaningfully. In addition to name and title, this should include business unit, location, and function. You can substitute specific titles with more generic authority levels, such as “non-managers,” “managers,” “business unit management,” or “executive management,” to give you a more meaningful way of looking at the results. If you want to allow anonymity in the responses, you can still have participants self-identify the same data fields (excluding their name) at the start of the survey without compromising that anonymity. In those cases, use predefined fields to avoid variation in responses. The ability to analyze results across these data fields can be the difference between identifying a business unit with an apparently toxic culture and realizing your message is fizzling out at the mid-manager level.
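If your survey tool exports raw responses, the slicing described above takes only a few lines of analysis. The sketch below, with entirely hypothetical field names and scores, groups one question's 1–5 ratings by a self-identified data field such as authority level or business unit:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical responses: each record carries the participant's
# self-identified data fields plus a 1-5 score for one question
# (e.g., "management sets an ethical tone").
responses = [
    {"level": "non-manager", "unit": "EMEA", "score": 4},
    {"level": "manager", "unit": "EMEA", "score": 2},
    {"level": "manager", "unit": "Americas", "score": 5},
    {"level": "executive management", "unit": "Americas", "score": 5},
    {"level": "non-manager", "unit": "EMEA", "score": 3},
]

def average_by(field, records):
    """Group scores by one data field and return the mean per group."""
    groups = defaultdict(list)
    for record in records:
        groups[record[field]].append(record["score"])
    return {group: round(mean(scores), 2) for group, scores in groups.items()}

print(average_by("level", responses))
print(average_by("unit", responses))
```

Running the same grouping over each data field in turn is what surfaces patterns like a low-scoring business unit or a drop-off at the manager level.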
Use a clear and consistent scoring scale as much as possible and make sure that scale is well explained and visible either on each question or at the top of each page. Be careful to distinguish on your scoring scale between someone who “doesn’t feel strongly one way or the other” and someone who simply doesn’t have an opinion because of lack of involvement in a certain area. Most online survey tools will allow you to include a “no response” answer that does not factor into your scoring. If you don’t include this option, there will be an inflated number of “doesn’t feel strongly one way or the other” responses that can dilute your results.
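The dilution effect is easy to demonstrate. In this sketch (hypothetical answers), `None` marks a participant who chose “no response”; folding those participants into the neutral midpoint pulls the score toward 3, while excluding them keeps the score driven by actual opinions:

```python
from statistics import mean

# Hypothetical answers to one question on a 1-5 scale; None marks a
# participant who picked "no response" rather than a neutral opinion.
answers = [5, 4, None, 1, None, 5]

# Treating "no response" as the neutral midpoint (3) dilutes the result...
diluted = mean(3 if a is None else a for a in answers)

# ...while excluding it scores only the participants who held an opinion.
scored = mean(a for a in answers if a is not None)

print(round(diluted, 2), round(scored, 2))
```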
Don’t confuse the participant
At all times, put yourself in the shoes of the participant when drafting your survey. Are the questions or statements clear, concise, and easily understood? Do you have inadvertent double negatives or contradictory wording? Is the wording simple and straight to the point? Does the participant understand the concepts or terms you are referring to?
Try to ask questions in a way that facilitates using the answer scale in a consistent manner. For example, on a 1–5 scale, the 5 could consistently be the most positive or affirmative result, while the 1 would be the most negative or dissenting result (or vice versa depending on the topic). Also, try to avoid jumping around between different question and response structures (“rank 1–5” followed by “pick all applicable” followed by “true or false” followed by “rank 1–5”). This allows the participant to get into a rhythm and focus on what the question is asking.
Define terms, especially compliance industry terms, so the participant can understand them. If necessary, include a glossary of terms that can be accessed on each page or a link of contacts that the participant can reach out to for clarification.
Make sure it is clear what you are asking for. For example, I often see questions asking the participant to rate risk as high, medium, or low. Does that mean assess whether the risk is applicable? Whether it is likely to happen? The potential impact? Maybe one or two of those? Or maybe all?
Have a few independent parties read your survey and provide feedback on any confusing instructions or wording prior to sending the survey out.