
6 basic rules to follow when making changes to your newsletter


Readers who follow our articles and are actively involved in bulk mailing will have noticed that we keep urging you to experiment with, test and change the newsletters you send. We are enthusiastic advocates of frequent changes, because they let you optimize your campaigns.

Many EmailSYSTEM users ask us for simple instructions on improving the newsletter they already send. They want to know how to tell whether a small change they made has produced better or worse results. Is there a trick or a technique? What is the right way to make changes?

In this article we present 6 rules you must follow to see a substantial improvement in your newsletter performance. To obtain substantial and statistically significant results over time, you need to adopt a method, a strategy, for your mailings. It is not difficult, we guarantee it; it just takes a little effort.

1) Know the basics of your newsletter

You need to know the basic measurable Key Performance Indicators (KPIs) of the campaigns you send. You cannot improve any campaign if you do not know what to measure and how. You should know, on average, the open rate of your newsletter, its click-through rate (CTR), and even the conversion rate on your website (attributed to the medium: email). The statistics section of EmailSYSTEM offers a wealth of metrics and indicators that give you a clear picture of your mailings.
For example, suppose we run an experiment with a new campaign. We send the campaign for the first time, call it Campaign A, and notice that it has a 20% open rate. We then make some changes, call the result Campaign B, and send it again. The open rate of B is 24%. We have a winner: Campaign B is more successful than A.
But this is not enough to draw a safe conclusion. If the average open rate across your sending history is 29%, then neither Campaign A nor Campaign B has actually outperformed your usual results or brought you closer to your overall goal.
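To make this concrete: if you keep the raw counts from each send, these KPIs are simple ratios. Below is a minimal sketch in Python showing how they are typically computed. The counts are hypothetical, and exact definitions (for example, clicks divided by delivered emails versus clicks divided by opens) vary slightly between platforms.

```python
# Minimal sketch of the basic KPI calculations discussed above.
# The counts are hypothetical; in practice you would read them from
# your email platform's statistics section.

delivered = 10_000      # emails successfully delivered
unique_opens = 2_400    # recipients who opened at least once
unique_clicks = 380     # recipients who clicked at least once
conversions = 45        # goal completions attributed to the email medium

open_rate = unique_opens / delivered
click_through_rate = unique_clicks / delivered
click_to_open_rate = unique_clicks / unique_opens
conversion_rate = conversions / delivered

print(f"Open rate:          {open_rate:.1%}")
print(f"Click-through rate: {click_through_rate:.1%}")
print(f"Click-to-open rate: {click_to_open_rate:.1%}")
print(f"Conversion rate:    {conversion_rate:.2%}")
```

Keeping a running history of these numbers is what lets you judge a new campaign against your own average, rather than against a single earlier send.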

2) Test ONLY one change at a time

This is the key to getting genuinely useful results. When comparing two campaigns, do not test two different templates or layouts that differ in many ways; such a comparison ends up tracking dissimilar things. After your tests, you must be able to point to the single element that increased or decreased your newsletter's performance. If you compare two campaigns that differ in buttons, images, wording and titles, it becomes extremely difficult to determine which of those changes led to the better or worse result.

3) Send to random groups

If you have a single contact list, copy it and split the contacts into two NEW random groups, Group A and Group B. If you have more than one list, simply build two NEW groups from the existing ones and move the contacts you will use into them. The selection of contacts and their assignment to the two groups must be random; you want to gather unbiased, empirical data.

The larger the test groups, the more reliable the results will be. If you test your changes on small, dissimilar groups or subgroups, you may not end up with statistically significant results.
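If you export your contact list, the random 50/50 split itself is straightforward. Here is a minimal sketch in Python, assuming the contacts are available as a plain list of email addresses (the addresses below are placeholders):

```python
# Minimal sketch of a random 50/50 split of a contact list into
# Group A and Group B. The addresses are placeholders; in practice you
# would export the list from your email platform and re-import the groups.
import random

contacts = [f"user{i}@example.com" for i in range(1, 10_001)]

random.seed(42)          # fix the seed only if you need a reproducible split
shuffled = contacts[:]   # copy so the original list stays untouched
random.shuffle(shuffled)

midpoint = len(shuffled) // 2
group_a = shuffled[:midpoint]
group_b = shuffled[midpoint:]

print(len(group_a), len(group_b))  # e.g. 5000 5000
```

Shuffling the whole list before splitting avoids bias from the order in which contacts were added, for example the newest subscribers all sitting at the end of the list.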

4) Test changes on the same day and time

When you test different versions of the same newsletter, make sure they are sent on the same day and at the same time. As we have noted in an earlier article, based on our own internal data, the best days for sending email are Tuesday to Friday, especially in the morning. If you send Campaign A on Tuesday and decide to send Campaign B on Friday, you cannot account for the unknown, random factors that changed between Tuesday and Friday, so your results will not be comparable.
Send Campaign A on whatever day and time you choose, and send Campaign B at exactly the same time.

5) Measure only what matters

As mentioned earlier, you can experiment with many kinds of changes: the entire template, the images, the email subject, the buttons. Each of these affects a different measurable result.
Let's say you want to compare the performance of two different buttons, to find out which one better entices your recipients to click. For this change, the metric that matters is the click-through rate (CTR), that is, how many recipients clicked the button, not the open rate of the newsletter.

The open rate is of course very important, but it is the metric to focus on when, for example, you are testing two different subject lines. In that case it genuinely makes sense to measure which campaign achieved the higher open rate.

6) Make sure your results are statistically significant

A statistically significant result is one that cannot be attributed to random factors. For example, if a campaign sent on Christmas Eve gets a very low response, that is not the rule; you should consider that the holiday may have affected the open rate.
Whatever tests you run, pay close attention to external factors that may affect your overall results. Holidays, elections and strikes always influence the numbers, and this should be taken into account when interpreting the statistics.
You should have a confidence level of at least 90%-95% before assuming that your results are statistically significant. If the results in your hands are not statistically significant, then that particular experiment should be considered unsuccessful.
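One common way to check this is a two-proportion z-test on the open (or click) counts of the two campaigns. The sketch below is an illustration in plain Python with hypothetical counts; dedicated statistics libraries offer equivalent tests.

```python
# Minimal sketch of a two-proportion z-test comparing the open rates of
# Campaign A and Campaign B. The counts below are hypothetical examples.
from math import sqrt, erf

opens_a, sent_a = 2_000, 10_000   # Campaign A: 20% open rate
opens_b, sent_b = 2_400, 10_000   # Campaign B: 24% open rate

p_a = opens_a / sent_a
p_b = opens_b / sent_b
p_pool = (opens_a + opens_b) / (sent_a + sent_b)   # pooled proportion

se = sqrt(p_pool * (1 - p_pool) * (1 / sent_a + 1 / sent_b))
z = (p_b - p_a) / se

# Two-sided p-value from the standard normal distribution.
p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

confidence = 0.95                  # the 90%-95% threshold suggested above
significant = p_value < (1 - confidence)

print(f"z = {z:.2f}, p-value = {p_value:.4f}, significant at 95%: {significant}")
```

With 10,000 recipients per group, the 20% versus 24% difference is clearly significant; with only a few hundred recipients per group, the same percentages would often fail the test, which is exactly why rule 3 insists on large groups.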
