Neural network applications in evaluating banner ad creative efficiency

This work covers the technical aspects of building a solution to the problem of predicting advertisement banner efficiency: the Rectified Linear Unit activation function, a simple neural network architecture, a trustworthy model, and visualizing convolutional neural networks.

Deciding on an advertising banner is not an easy task. Before the creation of banners begins, the client and the advertising agency have to identify the goal of the advertisement, the target audience, the message, the topic, and many other things. Then, when those things are settled, the agency conducts research on customer preferences, trying to identify what will resonate with the customer, as no one wants to launch an advertisement no one cares about. After that, banner designs are proposed. The same design is created in several variations to give a focus group something to choose from. After the focus group gives its response, the agency carefully analyzes the data and, drawing on its expertise, chooses perhaps two or three advertisement banners that will make it into the advertising campaign for the next half of the year.

Technically speaking, this is a logical way to choose the best banner creatives from nearly a hundred or even more images, but it has drawbacks. The choice of advertisement banners is largely based on the preferences of a focus group, and despite being the most logical thing to do, this has its flaws. For one, people's stated preferences and what actually works can be different things: the focus group could have liked advertisement banner A better, while in fact advertisement banner B would be more effective. Of course, this is where in-house expertise comes in, weighing in on the advertisement banners and deciding what will be used in the end. However, in spite of that expertise, the final choice can still be largely influenced by the focus group, and this can be a problem. Because of this inconsistency between what works and what is liked, there has to be a way to choose an optimal solution.

The marketing specialists who conduct the research do not know their customers' preferences; that is why they explore them. Customers in the focus group, on the other hand, know their preferences, or at least think they do, but do not know much about advertising. Together they try to help one another, but, as was said above, there can be inconsistencies. That is why in this work a convolutional neural network that analyzes the effectiveness of advertisement banners is proposed as a solution.

The idea of this work was to train a convolutional neural network model which would help marketing specialists identify the most effective advertisement banner creatives out of the focus group preference pool. The model would be trained on creatives that were used in previous advertising campaigns and thus have accumulated metric data, such as click-through rate, which is used to assess the effectiveness of a banner creative. Thus the model is able to learn from good and bad examples and help determine whether a chosen advertisement banner is actually effective. It would still be recommended to use mixed-method research: qualitative (focus groups) and quantitative (neural networks).

Such a model would bring value to the advertising agency in many forms. For starters, and this is the most obvious benefit, the agency would be able to choose advertisement creatives with the best click-through rates. This in turn would reduce the cost per click under CPM (cost per mille) and some other advertising models, so more results would be achieved without expanding the budget. Secondly, it would reduce the time needed to decide between advertisement banners, as the model can distinguish effective from less effective creatives. This decision could even be made before the focus group stage, with the model predicting whether a creative is effective before the batch is handed over for focus group inspection. That would reduce the time and cost the advertising agency spends while the focus group decides on the best banners, as the group would receive only the most effective ones and would not have to spend time on the rest. Alternatively, if there is a reason to do so, marketing specialists could give the focus group both types of creatives and only then obtain the model's prediction, to see whether enough of the banners the focus group liked were also deemed effective by the model. And of course, as discussed above, this model would be a way to preserve expertise within the agency: talent may leave to pursue other opportunities and experts may take long to hire, but projects would not have to be put on hold because a manager has left the company. Thus, such a solution would ensure that even without an expert there is a trained algorithm that can make good, calculated decisions, or at least assist marketing specialists in doing so.

To sum up, a model which helps distinguish effective from ineffective advertisement banner creatives would cut financial and time costs and improve revenue for the client and thus for the agency, while also preserving expertise within the company in the form of a trained algorithm, thereby reducing operational risks such as human error and the sudden absence of an expert. This would benefit both the client and the agency in the long run, as well as their relationship and synergy.

Results of computational experiments

The objective of this work was to train a model that would help assess the effectiveness of advertising banners. In this part, the data, the assumptions, and the process of creating the model are discussed, as well as intermediate results of the training.

For the data, we had a little over four thousand advertisement creatives for many different companies, together with their performance metrics. These banner creatives did not come with the accompanying advertisement text, so the model is trained without accounting for the offer in the text that accompanies the image. Also, unfortunately, the initial purpose of each advertising campaign is not available, so the creatives cannot be grouped into categories; such categories would potentially correspond to marketing funnel stages, such as awareness, consideration, and purchase. With that said, the limitations of this work are acknowledged and outlined above. Future work may add these missing features, but for now the focus is only on the advertisement images themselves. Thus the objective of this work is to train the model and find the visual features that contribute to an effective advertisement banner creative. It is also worth noting that all advertisement banners come from paid social channels, such as Facebook, VKontakte, Odnoklassniki, and other social networks.

As the measure of effectiveness, the click-through rate was chosen. Let us discuss why exactly this metric was chosen. It was one of a limited set of metrics available for this research, the others being CPA, CPC, CPM, conversions, views, and clicks. It has to be noted that each advertisement banner could have been constructed for a different purpose: some banners were created with spreading awareness in mind, while others were used for retargeting, and so on. Unfortunately, as previously mentioned, these objectives are not available in the dataset at hand, so some universal way to assess the effectiveness of advertisement banners is needed. First, the statistics of each advertisement are limited by the advertising campaign budget, so absolute metrics are ruled out immediately as an effectiveness measure. For example, if one advertising campaign had a budget of 100,000 dollars and another only 1,000, it is very likely that the former will get many more views; the creative in the first campaign would have to be extraordinarily bad to underperform the second in absolute terms, so such comparisons are not meaningful. It is assumed that the more budget is poured into an advertising campaign, the higher its absolute metrics, such as clicks, views, and purchases.

With that being said, relative metrics were considered as candidates for the dependent variable. At our disposal we have only two such metrics: the click-through rate and conversions. The conversions metric is not a good candidate, since it measures the proportion of purchases to clicks, and purchases are not always the goal of advertising. Of course, the end goal of advertising is to increase shareholder value, usually through an increase in revenue, but the link is not always that direct. More often than not, companies are interested in advertising affecting their top and bottom line not directly through the advertisement itself, but through the long-run effect it has on consumers' perception of the company and its brand; because of that, there may be no direct purchase associated with the advertisement. This leaves us with the click-through rate, the number of clicks divided by the number of views. This metric is usually associated with the interest the advertisement generates in the product or service. Usually the condensed offer is contained in the advertisement, and the advertiser would like potential customers to take a look at a more complete version of the offer on their web page; the advertiser is interested in potential customers clicking the advertisement, and this happens when the advertisement is appealing to them. This reasoning rests on some relaxed assumptions which, while limited, are acknowledged and not in any sense false.

With all assumptions laid out, we can finally move to the dataset. The dataset comprised nearly four thousand advertisement creatives, half of which, unfortunately, were videos and GIFs. These types of advertisements had to be dropped, which left us with around two thousand images. While images are the advertisement type we were looking for, not all of them were included, mainly because some had a low number of clicks and views, which led to inflated click-through rates, such as 10%, while the average click-through rate was around 0.17%. Such outliers were dropped based on a threshold on the number of clicks. A threshold of 40 was chosen with the central limit theorem in mind: for simplicity's sake we assumed that the click-through rate follows a binomial distribution, which with a larger number of clicks can be approximated as normal. We also dropped outliers based on the interquartile range. That left us with a little under two thousand creatives. The resulting sample distribution of creatives' click-through rates is shown in Figure 1.
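
As an illustration, this cleaning step could be sketched roughly as follows, assuming the statistics sit in a pandas DataFrame. The column names ("clicks", "views", "file_type") and the 1.5 x interquartile-range rule are assumptions made for the sake of the example, not a description of the exact code used in this work.

import pandas as pd

def clean_creatives(df: pd.DataFrame, min_clicks: int = 40) -> pd.DataFrame:
    # Keep static images only; videos and GIFs are dropped.
    df = df[~df["file_type"].isin(["video", "gif"])].copy()
    # Click-through rate: the number of clicks divided by the number of views.
    df["ctr"] = df["clicks"] / df["views"]
    # Drop creatives with too few clicks, where the CTR estimate is unreliable.
    df = df[df["clicks"] >= min_clicks]
    # Drop CTR outliers lying outside 1.5 interquartile ranges.
    q1, q3 = df["ctr"].quantile([0.25, 0.75])
    iqr = q3 - q1
    return df[df["ctr"].between(q1 - 1.5 * iqr, q3 + 1.5 * iqr)]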

Figure 1. Sample distribution of advertisement banner click-through rates

The initial four thousand advertisement banners were by no means considered a lot of data, so it was expected that additional techniques for dealing with a small dataset would have to be introduced, and being left with half as many objects after cleaning removed any doubt about it. Upon inspection, the majority of these advertisements do not look like one another and seem completely different, so it is very possible that they are drawn from different distributions; this is explained by the absence of categories for these advertisement banners.

From the research done in the literature review, it was determined that the most suitable model for the task would be a deep convolutional neural network. The problem with convolutional neural networks is that they usually require much more than two thousand image objects in the training set. Most state-of-the-art architectures were trained using datasets with tremendous amounts of data, amounting in some cases to millions of images. Thus we considered and implemented the following solutions to overcome this limitation.

First, we used pretrained model weights, as was suggested in many articles from the literature review. This helped a lot, since many of the objects appearing in our advertisements also appear in the datasets the considered models were originally trained on. Of course, we removed the top layers of the pretrained network and added our own, since the initial model was not trained to solve our exact problem.

Second, we used data augmentation techniques. This was the most complicated part to decide upon because, technically speaking, we cannot be certain whether an augmentation will change the label of an advertisement banner, and we have no way to check whether it does. The only way would be to run an A/B test, which would require quite a large budget given the two thousand images in the dataset. Thus we considered only the most conservative data augmentation techniques, which have been argued to preserve the label: horizontal flip, vertical flip, both flips at the same time, and rotation of up to 30 degrees. It is acknowledged that in some cases even these can alter the label of the image; with images containing text this can be a particular problem, however, our goal is not text recognition but determining the efficiency of the advertisement banner. It can also be argued that rotated and flipped images have less appeal to the customer. However, our dataset consists of advertisements used in paid social campaigns, which a potential customer may spot while browsing on a phone or lying in front of a laptop, that is, in positions which visually imitate the transformations applied to the dataset. Thus the argument can be made that these changes could affect the click-through rate of an image, but not significantly, and therefore these techniques can be assumed safe to use. So we applied these data augmentation techniques.
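
A minimal sketch of this augmentation policy, assuming a Keras ImageDataGenerator is used; only the transformations named above are enabled, and everything else is left at its defaults.

from tensorflow.keras.preprocessing.image import ImageDataGenerator

augmenter = ImageDataGenerator(
    horizontal_flip=True,   # random horizontal flips
    vertical_flip=True,     # random vertical flips (both flips together occur by chance)
    rotation_range=30,      # random rotations of up to 30 degrees
)
# Augmented batches are later streamed from the training images only (see the
# discussion of data leakage further below).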

Third, as we used the "body" of the model without its "head", we could add any layers we desired to help improve the accuracy of our model. In our case, we added a Gaussian noise layer so that the model would generalize better, and a dropout layer so that the parameters would not overfit the data.

It is worth taking a step back and looking at what has been done to prevent the model from overfitting, as these techniques share the same end goal but act a little differently from one another. So far, in order to improve generalization, we have applied data augmentation and added Gaussian noise and dropout layers. However, they affect the model differently. Augmentation helps the convolutional layers train better, as they now have to generalize over data with more variability. The dropout layer forces the network to rely not only on the final layer but also on the information coming from the activations of other layers, as sometimes the signal from an important convolutional layer simply does not come through. Finally, the Gaussian noise layer keeps the model from focusing on small features and makes it look at the image from another angle, searching for more high-level features. With that said, all these techniques were applied to help our model identify higher-level features, so that it could generalize better.

So far, we have discussed how to improve the generalization of our model, but have not stated the problem we are trying to solve. As was previously stated, we want to help identify efficient advertisement creatives. At the start, we had an ambitious idea to identify the features that contribute to an increase in click-through rate and by how much, so the model would yield a continuous value and, after an explainer was applied, we would see that, say, a human face yields a 0.01% increase in the click-through rate. Unfortunately, the model did not yield any satisfactory result: the R-squared was negative and the mean absolute percentage error was over 1000%. This could be caused by the dataset having too few images for such a task, by the images being drawn from different distributions, or, of course, by the task itself, as there could be a lot of noise in the click-through rate values. Thus it was decided to approach the problem from a different angle.

In some cases it is very valuable to have a prediction of the click-through rate itself, for example for media planning purposes. However, in most cases one could argue that the advertiser would still be better off simply knowing whether the advertisement creative is "good" or "bad". Good and bad are not very quantifiable measures, but we can substitute them with "higher than average" and "lower than average", which makes perfect sense. Therefore we classify our advertisement banners into creatives with a higher-than-average click-through rate and creatives with a lower-than-average one. With that out of the way, we can start creating our model.
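
Continuing the hypothetical DataFrame from the earlier sketch, the labelling rule reduces to a single line (the "ctr" column name remains an assumption):

# 1 = higher-than-average click-through rate, 0 = lower-than-average.
df["label"] = (df["ctr"] > df["ctr"].mean()).astype(int)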

The model was VGG19 with weights pre-trained on the ImageNet dataset but without the top of the network, so that we could add our own layers. The layers we added were a Gaussian noise layer with a variance of 1, followed by a global average pooling layer and a dense layer with 256 neurons and the ReLU activation function. We then added a dropout layer with a rate of 50%, switching off that proportion of units during training.
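
A sketch of this architecture in Keras, assuming 224x224 RGB inputs; the two-unit softmax output layer is our assumption for the binary higher/lower-than-average classification, chosen because it is consistent with the trainable-parameter count reported further below.

from tensorflow.keras import Model, layers
from tensorflow.keras.applications import VGG19

base = VGG19(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # the pretrained convolutional "body" stays frozen

x = layers.GaussianNoise(1.0)(base.output)       # additive noise with unit variance
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dense(256, activation="relu")(x)      # 256 neurons, ReLU activation
x = layers.Dropout(0.5)(x)                       # half of the units switched off during training
outputs = layers.Dense(2, activation="softmax")(x)  # assumed two-class head

model = Model(inputs=base.input, outputs=outputs)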

At first, we attempted to train the model without data augmentation, the Gaussian noise layer, and the dropout layer. The model yielded an accuracy of around 58%. Accuracy discussed in this paragraph refers to test and validation accuracy, as they were roughly the same; percentages are rounded to integers for simplicity's sake. This result was improved by adding the Gaussian noise layer, which added around 2%, yielding 60%. The result was further improved by roughly the same 2% with the addition of the dropout layer. Finally, data augmentation was added, which improved accuracy by another 4%, resulting in a total of 66%. Thus, all applied techniques together improved the model's accuracy by 8%, which supports the claims made in some articles from the literature review that generalization techniques stack well together. The results are summarized in Table 1 below.

Table 1. Accuracy improvement with addition of new techniques and layers

Initial accuracy          58%
+ Gaussian noise layer    60% (+2%)
+ Dropout layer           62% (+2%)
+ Data augmentations      66% (+4%)

After initial training, where the learning rate was 0.1% (0.001), training continued with a lower learning rate of 0.01% (0.0001), but this did not give any improvement and the model kept overfitting its training data as training progressed. Initializing training with the 0.01% learning rate also did not yield any results, leading to the model settling in a local minimum and sometimes not even reaching the 62% accuracy mark. Higher learning rates only led to a faster decline in accuracy as training progressed.
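
A minimal sketch of this training schedule; the Adam optimizer, the loss, the epoch counts, and the batch size are assumptions not stated above, the model is the one sketched earlier, and the training and validation arrays are assumed to have been split as discussed around data leakage below.

from tensorflow.keras.optimizers import Adam

model.compile(optimizer=Adam(learning_rate=1e-3),   # initial learning rate, 0.1%
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=20, batch_size=32)

# Continue training with a ten times lower learning rate, 0.01%.
model.compile(optimizer=Adam(learning_rate=1e-4),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=10, batch_size=32)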

Models such as VGG19, VGG16, InceptionV3, and ResNet50 were tried as architectures for our convolutional neural network. However, all of them yielded pretty much the same results, plateauing around 65-66%. Thus, unfortunately, no architecture could be said to perform better than the rest.

It is worth noting that, after data augmentation, the data could not be split into training, validation, and test sets the way it is usually done. If we create the augmentations first and then split the dataset, augmented versions of the same image end up in different subsets and the measured accuracy is higher than it should be, because the model has effectively already been trained on nearly the same data. For example, if a particular car image and its augmentations end up in the training, validation, and test samples, the model is trained on this image, and when the time comes to validate, it recognizes practically the same input and can of course recall its classification. When experimenting with the model, such a split resulted in nearly 85% accuracy, which was, of course, misleading. Thus another strategy was used in constructing the dataset: first the original dataset was split, and only then was augmented data added to the training portion. This brought the model's accuracy down, but the figure is honest, because it is the accuracy the model would show on new data if deployed in production.
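
A sketch of this leakage-safe ordering, assuming scikit-learn is used for the split; the split proportions and the variable names ("images", "labels") are illustrative.

from sklearn.model_selection import train_test_split

# Split the original, non-augmented images first.
x_train, x_test, y_train, y_test = train_test_split(
    images, labels, test_size=0.2, random_state=42)
x_train, x_val, y_train, y_val = train_test_split(
    x_train, y_train, test_size=0.2, random_state=42)

# Only now are augmented copies generated, and only from the training images,
# so no augmented version of a test or validation image reaches the training set.
train_generator = augmenter.flow(x_train, y_train, batch_size=32)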

It is also worth noting that the pretrained parameters were kept fixed during training. For example, the VGG19 model had 20,024,384 non-trainable parameters and 131,842 trainable parameters. We also experimented with unfreezing the non-trainable parameters, but this did not yield any significant improvement on the test data.

Lastly, we decided to visualize how the model differentiates between more effective and less effective advertisements. Out of interest, we first tried to visualize low-level elements, to see whether the model had already caught on to anything interesting at the beginning. This did not amount to anything of essence, because the first layer only learns very low-level features, which cannot be interpreted by humans. The result of visualizing the first layer is presented for the curiosity of the reader and to demonstrate that such visualizations may not be very informative; it is shown in Figure 2.

Figure 2. Visualizing the first layer of our model

Low-level features, however, do not help humans understand how the model differentiates between advertisement banners by efficiency. In other words, such an explanation is not reliable, because we do not understand it. What would make sense is to look at higher-level features, or to have the explainer point out which parts of the image actually contributed to the model's decision. This is what we did with the LIME method, the results of which are presented below in Figure 3.
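
Before turning to the figure, here is a minimal sketch of how such LIME explanations could be produced with the lime package; the file path and the pixel scaling are placeholders, and the model is the two-class classifier sketched earlier.

from lime import lime_image
from skimage.segmentation import mark_boundaries
from tensorflow.keras.preprocessing.image import img_to_array, load_img

# "banner.png" is a placeholder path; pixels are scaled to [0, 1] as assumed during training.
image = img_to_array(load_img("banner.png", target_size=(224, 224))) / 255.0

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    image, model.predict, top_labels=2, hide_color=0, num_samples=1000)

# Keep the superpixels that push the prediction towards the predicted class.
temp, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5, hide_rest=False)
highlighted = mark_boundaries(temp, mask)  # the banner with the highlighted areas outlined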

Figure 3. To the left: initial images with highlighted explainer areas, to the right: only highlighted areas

Figure 3 shows the initial advertisement banners on the left with highlighted areas that contribute to the click-through rate. Unfortunately, the areas highlighted in these images do not seem to be meaningful and represent very low-level features of the image. In the second advertisement banner, however, a piece of text is highlighted, which could serve as a hint that the model favors advertisements with text. The text is present in the first advertisement as well, but the explainer did not highlight it, which could mean that not all text appears appealing to the model. All in all, the explanations for the second image could be meaningful in some way, but not those for the other two. More images should be inspected before using this model in the field, to see whether their explanations look reasonable.

The fact that text is highlighted in the second image but not in the third could lead us to the conclusion that text does not always affect the click-through rate. However, different texts are written for different services, and the fact that the explainer highlighted two letters which are the beginning of the word "beauty" in Russian could indicate that the model differentiates a little between industries without having any data on them, although it does not fully capture this distinction.

To sum up, a convolutional neural network model was trained to differentiate between more efficient and less efficient advertisement banners. The task, however, was not straightforward, as many assumptions had to be made beforehand. The data used consisted entirely of images. Some state-of-the-art architectures were tried for the task, and the results were improved by further fine-tuning the model, reaching 66% accuracy on the test data. Explanation techniques were then applied to determine how the model distinguishes between banners. Not all of its explanations seem trustworthy, but in some cases it highlights high-level features, which could signal that the model's explanations follow the logic a human would look for, although the model requires additional training. Since many assumptions were made due to the lack of data, enriching the dataset with other data, such as the text of the offer, the industry, audience features, and so on, would very likely increase the model's performance.

The model could be further improved to become more accurate and to give more trustworthy explanations. For starters, there could be more initial data; to reiterate, the data being used can only be obtained if the banner was used in an advertising campaign, so acquiring its label requires a marketing budget. An advertisement banner can also carry text, so this text could be used to evaluate the banner's efficiency as well. Many other attributes, such as the advertising channel, audience characteristics, the lifetime of the campaign, the period of the campaign, and so on, could be used to enrich the dataset, and these attributes would almost certainly improve the model's results. With that being said, the model may not be ready for the field, but it is certainly promising, and with further development it could become a very important tool for the advertising business.

Conclusion

The technological advances we have the opportunity to observe every day are astounding. In today's world, companies undergo digital transformations in departments they never thought would be digitalized at all. As was discussed in the beginning, some things we perceive as common today were probably considered fiction even ten years ago. In this work, we explored one particular process which could be improved by machine learning.

The topic of this work was inspired by advances in machine learning, but was motivated by a particular need of advertising agencies. This work was written during the global COVID-19 pandemic, which has brought crisis to many industries. Unfortunately, advertising came to be one of them, with clients cutting their marketing budgets and turning down many long-term projects. With that, many business operations have been disrupted simply because no one was prepared for such a catastrophe. In spite of being one of the most digitalized industries, quite a lot of advertising work is still done in person, with a group of people. Many processes are being adapted to the new "reality" as the crisis has left its mark on business around the globe.

People are an integral part of the business, and these are not just loud words - in fact, people are the business, especially in advertising. There is an opinion that at some point AI will replace humans. While we are not going to try to prove this opinion wrong or right, it is worth mentioning that AI can also empower individuals in many areas of their lives. The solution discussed and provided in this work was designed to help marketing specialists improve the effectiveness of their work - decrease the time spent, optimize the budgets spent, increase the results of the advertising campaign - but in no way was it designed to replace them.

Machine learning models do not perceive the world the same way humans do, and that can be used to the advantage of the business. Algorithms are not influenced by people's opinions, moods, and impressions, which is why using them as a supporting tool for human-made decisions would be a great solution. In our particular case, the model was able to learn from nearly a year's worth of data in probably less than four minutes of training, while it would be challenging for a human to inspect two thousand images and their click-through rates and then try to find similarities between the images. But a human is more aware of the business environment and many other things that models cannot yet perceive. And that was never the objective, as we only wanted to train the model to assess advertisement creative efficiency in particular settings.

Of course, the model's perspective on advertisement banners and their efficiency should be broadened with more relevant data, such as audience characteristics, the period of advertising, the text in the image, and so on. But it is up to a human to decide on many other things which are outside the model's scope. As an example, the model is highly unlikely to take into account reputational damage inflicted upon the advertiser, which would also drive the company's click-through rates down. The model could still predict a good click-through rate, because the advertisement banner is actually great, but the banner would still get much less attention than it deserves, or could even bring more damage to the company. Of course, it can be argued that the model was trained on "good reputation" click-through rates, and had it had more data it would have predicted the click-through rate correctly for the "bad reputation" scenario. Yes, it could have, but it did not, and "could have" is usually not good enough for business. Such data may simply not have been possible to collect previously, and thus putting all the trust in the model's decision would be a mistake: a human has to check the model's prediction against reality. At the same time, it is very easy for a human to miss a pattern which a machine learning model catches very quickly. Thus it is clear that the particular solution discussed in this work would work best in the loop together with a human.

As a result of this work, a convolutional neural network model was produced that classifies advertisement banners into more efficient and less efficient ones. Many state-of-the-art architectures were tried, but all of them performed at the same level. Their results were bolstered by adding additional layers, which resulted in better generalization, and by data augmentation techniques, which increased the accuracy of the model further. As a result, the model achieved 66% accuracy on the test data. This result is significantly better than random chance, but still not good enough for production use. Still, when visualized with LIME, the model provided some insights into its findings, which could prove useful in research and point out features that were previously left unnoticed. The model could be further improved by enriching the dataset with more data as well as additional features, as discussed above.

A "real life" application of this model could, as an option, be an internal agency API connected to a front-end interface, where the creative department could upload their advertisement banners to double-check whether an image they created would have a higher click-through rate. The interface would then show the probability with which the model expects the creative to perform well, also highlighting the areas of the image it believes contribute to that probability, positively and negatively. With that, the creative department would be able to guide their creativity with numbers. Such a tool would likely be used in internal media planning as well, to double-check KPI estimates. Applying it before focus groups would save time and money by cutting down the choice sets. Last but not least, this model could be used for competitive advertisement analysis, to estimate competitors' metrics. All these applications would drive data-driven decision-making in the organization.
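
Purely as an illustration, such a service could be sketched with a small web framework; the framework choice (FastAPI), the endpoint name, and the response fields are all hypothetical and not part of this work.

import io
import numpy as np
from PIL import Image
from fastapi import FastAPI, File, UploadFile

app = FastAPI()
# "model" is assumed to be the trained classifier, loaded once at application start-up.

@app.post("/score-creative")
async def score_creative(file: UploadFile = File(...)):
    # Read the uploaded banner and bring it to the model's input format.
    raw = await file.read()
    image = Image.open(io.BytesIO(raw)).convert("RGB").resize((224, 224))
    batch = np.expand_dims(np.asarray(image) / 255.0, axis=0)
    # Probability of the "higher than average click-through rate" class.
    probability = float(model.predict(batch)[0][1])
    # A fuller version would also return the LIME-highlighted areas.
    return {"probability_above_average_ctr": probability}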

As a final remark, it is appropriate to appreciate the advancements that progress brings us: the neural network visualizations shown in the results section offer a new perspective on how one could view advertisement banners and could broaden professional expertise. It was a great experience to look at the advertisements from a neural network's point of view, and a great journey working on this paper too.

Bibliography

Liu, X., Wang, X., & Matwin, S. (2018). Interpretable Deep Convolutional Neural Networks via Meta-learning. International Joint Conference on Neural Networks (IJCNN), https://arxiv.org/pdf/1802.00560.pdf

Zeiler, M.D., and Fergus R. (2014). Visualizing and Understanding Convolutional Networks. European Conference on Computer Vision. https://arxiv.org/abs/1311.2901

Lundberg, S., & Lee, Su-In. (2017). A Unified Approach to Interpreting Model Predictions. Computing Research Repository, abs/1705.07874. https://arxiv.org/abs/1705.07874

Gosiewska, A., and Biecek, P. (2019). iBreakDown: Uncertainty of Model Explanations for Non-additive Predictive Models. arXiv preprint arXiv:1903.11420. https://arxiv.org/abs/1903.11420

Zintgraf, L. M., Cohen, T. S., and Welling, M. (2016). A new method to visualize deep neural networks. CoRR, abs/1603.02518. http://arxiv.org/abs/1603.02518

De Veaux, R., Ungar, L. (1997). A brief introduction to neural networks. Technical Report, Williams College, Williamstown, MA. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.33.3938&rep=rep1&type=pdf

Shorten, C., Khoshgoftaar, T.M. (2019). A survey on Image Data Augmentation for Deep Learning. J Big Data 6, 60 (2019). https://doi.org/10.1186/s40537-019-0197-0

Ribeiro, M., & Singh, S., & Guestrin, C. (2016). “Why Should I Trust You?”: Explaining the Predictions of Any Classifier. 97-101. https://www.aclweb.org/anthology/N16-3020/

Lecun, Y., & Bottou, L., & Bengio, Y. & Haffner, P. (1998). Gradient-Based Learning Applied to Document Recognition. Proceedings of the IEEE. 86. 2278 - 2324. 10.1109/5.726791.

Krizhevsky, A. & Sutskever, I. & Hinton, G. (2012). ImageNet Classification with Deep Convolutional Neural Networks. Neural Information Processing Systems. 25. 10.1145/3065386. http://www.cs.toronto.edu/~hinton/absps/imagenet.pdf

Sultana, F., & Sufian, A., & Dutta, P. (2019). Advancements in Image Classification using Convolutional Neural Network. Submitted to 2018 Fourth International Conference on Research in Computational Intelligence and Communication Networks (ICRCICN 2018). https://arxiv.org/abs/1905.03288

He, T.,& Zhang, Z., & Zhang, H., & Zhang, Z.,& Xie, J. & Li, M. (2019). Bag of Tricks for Image Classification with Convolutional Neural Networks. The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019, pp. 558-567. https://arxiv.org/abs/1812.01187

Maggiori, E.,& Tarabalka, Y., & Charpiat, G., & Alliez, P. (2017). High-resolution image classification with convolutional networks. IEEE International Geoscience and Remote Sensing Symposium - IGARSS 2017, Jul 2017, Fort Worth, United States. https://hal.archives-ouvertes.fr/hal-01660754/document

Shu, M. (2019). Deep learning for image classification on very small datasets using transfer learning. Creative Components. 345. https://lib.dr.iastate.edu/creativecomponents/345

Özgenel, C.F., and Sorguç, A.G. (2018). Performance Comparison of Pretrained Convolutional Neural Networks on Crack Detection in Buildings. 2018 Proceedings of the 35th ISARC, Berlin, Germany, ISBN 978-3-00-060855-1, pages 693-700. https://doi.org/10.22260/ISARC2018/0094

Potlabathini, H. (2019). Convolution Neural Network for Cooking State Recognition using VGG19. https://rpal.cse.usf.edu/reports/state_recognition_symposium_2019/2019-05.pdf

Kamran, S.A., & Sabbir, A.S. (2017). Efficient yet deep convolutional neural networks for semantic segmentation. https://arxiv.org/pdf/1707.08254.pdf

Perez, L. & Wang, J. (2017). The Effectiveness of Data Augmentation in Image Classification using Deep Learning. https://arxiv.org/abs/1712.04621

Wong, S., & Gatt, A., & Stamatescu, V., & McDonnell, M. (2016). Understanding data augmentation for classification: when to warp? https://arxiv.org/pdf/1609.08764.pdf

Inoue, H. (2018). Data Augmentation by Pairing Samples for Images Classification. https://arxiv.org/pdf/1801.02929.pdf

Fawzi, A., & Samulowitz, H., & Turaga, D., & Frossard, P. (2016). Adaptive data augmentation for image classification. 3688-3692. 10.1109/ICIP.2016.7533048.

O'Gara, S. & McGuinness, K. (2019). Comparing data augmentation strategies for deep image classification. IMVIP 2019: Irish Machine Vision & Image Processing, Technological University Dublin, Dublin, Ireland, August 28-30. doi:10.21427/148b-ar75

Gu, S., & Pednekar, M., & Slater, R. (2019). Improve Image Classification Using Data Augmentation and Neural Networks. SMU Data Science Review: Vol. 2, No. 2, Article 1. https://scholar.smu.edu/datasciencereview/vol2/iss2/1

Chatfield, K., & Simonyan, K., & Vedaldi, A. & Zisserman, A. (2014). Return of the Devil in the Details: Delving Deep into Convolutional Nets. BMVC 2014 - Proceedings of the British Machine Vision Conference 2014. 10.5244/C.28.6. https://arxiv.org/abs/1405.3531

Kang, G., & Dong, X., & Zheng, L., & Yang, Yi. (2017). PatchShuffle Regularization. https://arxiv.org/abs/1707.07103

Konno, T., & Iwazume, M. (2018). Icing on the Cake: An Easy and Quick Post-Learnig Method You Can Try After Deep Learning. https://arxiv.org/abs/1807.06540

Huang, Y., & Cheng, Y., & Chen, D., & Lee, H., & Ngiam, J., & Le, Q., & Chen, Z. (2018). GPipe: Efficient Training of Giant Neural Networks using Pipeline Parallelism. https://arxiv.org/abs/1811.06965

Singh, K., & Chaudhary, A., & Kaur, P. (2019). A Machine Learning Approach for Enhancing Defence Against Global Terrorism. 1-5. 10.1109/IC3.2019.8844947. https://ieeexplore.ieee.org/document/8844947

Herlocker, J., & Konstan, J., & Riedl, J. (2001). Explaining Collaborative Filtering Recommendations. Proceedings of the ACM Conference on Computer Supported Cooperative Work. 10.1145/358916.358995. https://grouplens.org/site-content/uploads/explain-CSCW-20001.pdf

Kaufman, Sh. & Rosset, S. & Perlich, C. (2011). Leakage in Data Mining: Formulation, Detection, and Avoidance. Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 6. 556-563. 10.1145/2020408.2020496. https://www.cs.umb.edu/~ding/history/470_670_fall_2011/papers/cs670_Tran_PreferredPaper_LeakingInDataMining.pdf
