For a seminar paper I want to investigate the impact of soft power. I thought about taking a stock company that received support from the state, but I don't know how to measure soft power as a variable. Any ideas or recommendations?
Our Approach to Quantifying Soft Power

Up until now, all attempts to quantify or measure soft power have been based on multi-nation surveys conducted at a single point in time. Such an approach has several inadequacies, as it is based on highly subjective poll responses that may be influenced by environmental factors at the time of the poll. A more robust approach is to gather time-bound measures of a large set of variables or metrics from multiple sources. These metrics can then be used to compute a quantitative measure of soft power continuously over time, rather than only once or twice a year.
Our approach quantifies soft power by continuously tracking hundreds of thousands of web sources, including news sites and social media such as Twitter feeds. Artifacts gathered from these sources are classified into the five key components of soft power – Diplomacy, Socio-cultural Values, Information and Media, Business and Economy, and Education, Innovation and Technology. Additionally, using sentiment analysis based on natural language processing, our system assigns a polarity and score to each artifact. For each article, our approach therefore computes a positive-negative score on all five dimensions of soft power for every country the artifact refers to. These scores are compared to historical frequency distributions, and using time series analysis our algorithm decides whether the keywords are following a historical trend or whether some event has had a significant impact on the keyword frequency. If the change is significant, the algorithm adjusts the country's overall soft power score based on a weighted measure of the five soft power dimensions.
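The excerpt does not include any implementation, but the scoring flow it describes (classify each artifact into dimensions, attach a sentiment polarity, test keyword frequencies against history, then adjust a weighted country score) can be sketched roughly as follows. This is a minimal Python illustration: the helper names classify_dimension and sentiment_polarity, the equal dimension weights, and the z-score significance test are all assumptions standing in for details the authors do not specify.

```python
# Illustrative sketch of the per-article scoring pipeline described above.
# All names (DIMENSIONS, classify_dimension, sentiment_polarity, WEIGHTS)
# are hypothetical -- the excerpt does not publish its implementation.
from collections import defaultdict
import statistics

DIMENSIONS = [
    "Diplomacy",
    "Socio-cultural Values",
    "Information and Media",
    "Business and Economy",
    "Education, Innovation and Technology",
]

# Assumed equal weights; the excerpt only says the overall score is
# "a weighted measure of the five soft power dimensions".
WEIGHTS = {d: 0.2 for d in DIMENSIONS}


def score_article(text: str, country: str,
                  classify_dimension, sentiment_polarity) -> dict:
    """Assign a positive/negative score to each soft-power dimension
    the article touches, for the given country.

    classify_dimension(text) -> list of dimensions mentioned
    sentiment_polarity(text) -> float in [-1.0, 1.0]
    """
    polarity = sentiment_polarity(text)       # NLP sentiment step
    scores = defaultdict(float)
    for dim in classify_dimension(text):      # dimension classification step
        scores[dim] += polarity
    return {"country": country, "scores": dict(scores)}


def significant_shift(history: list[float], current: float,
                      z_threshold: float = 3.0) -> bool:
    """Crude stand-in for the time-series test: flag the current keyword
    frequency as an 'event' if it deviates strongly from the historical
    distribution. Assumes at least two historical samples."""
    mean = statistics.mean(history)
    std = statistics.stdev(history) or 1e-9   # avoid division by zero
    return abs(current - mean) / std > z_threshold


def update_spi(spi: float, dim_scores: dict, learning_rate: float = 0.05) -> float:
    """Adjust the country's overall SPI by a weighted measure of the
    five dimension scores."""
    delta = sum(WEIGHTS[d] * s for d, s in dim_scores.items())
    return spi + learning_rate * delta
```

A production system would presumably use a proper time series model rather than a z-score, but the flow is the same one the excerpt describes: classify, score, test against history, reweight.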
Soft Power Index (SPI) Development Methodology

We have developed a unique big data methodology to develop the SPI. In this section we provide an overview of our methodology that will give an experienced analyst insight into our approach.

In the first step we scope out the entities of interest for an aspect of the SPI. For example, for the SPI we list all entities, their attributes, and the inter-relationships on which we need to tag and collect time series data. For this step we utilize our IOIIG framework to list the Individuals, Organizations, Infrastructure, Institutions and Geographies of interest:
• Individuals – citizens, non-citizens and leaders whose actions and reputations are significant in creating positive or negative public opinion, non-state actors, etc.
• Organizations – NGOs, political parties, etc.
• Infrastructure – social, cultural and educational infrastructure by geography
• Institutions – Home Ministry, Ministry of Culture, Ministry of Education, etc.
• Geography – countries, including their provinces, cities and neighborhoods

In the second step we identify a list of attributes that we need to tag and capture for each entity type. These attributes are driven by the list of actions, traits and sensors that we want to model for each entity.

The third step is big data analytics and research. In this step we identify sources and then use MapReduce to develop a database of parameters for the agent-based simulation, as well as a short list of research theories that will be utilized for modeling the behavior and strategies of entities. For example:
• Individuals – individual behavior might be predicted using concepts from positive psychology or well-being theory. The agents take actions to maximize their well-being, so we tag actions by the entities that are targeted towards those goals, and how those behaviors are modified when new data become available.
• Organizations – drawing from concepts in organizational behavior, we can analyze how organizations try to increase their resources and take actions to enhance the well-being of their members and, subsequently, the well-being of the organization as a whole.
• Infrastructure – what cultural and diplomatic infrastructure an entity is deploying, in which country, and for what goals.

In the fourth step we format and standardize the data collected for each entity type using a MapReduce approach and create a database; a toy sketch of this step follows below. This provides us with an abstraction of the real-world entities. In this same step all the entities are configured for their behavior – the actions they take on other entities as well as how they respond to changes in the environment.

In the last step of the process, we test the agent behaviors and calibrate them to real-world outcomes.
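As a rough illustration of the standardization in step four, here is a toy MapReduce-style sketch in Python. The entity schema, field names, and merge logic are assumptions for illustration only; the excerpt does not specify the actual data model.

```python
# Toy MapReduce-style sketch of step four: formatting and standardizing
# raw artifacts into a database of IOIIG entities. The schema below is
# an illustrative assumption, not the authors' actual design.
from dataclasses import dataclass
from functools import reduce

ENTITY_TYPES = ("Individual", "Organization", "Infrastructure",
                "Institution", "Geography")


@dataclass
class EntityRecord:
    entity_type: str   # one of ENTITY_TYPES
    name: str
    country: str
    attributes: dict   # tagged actions, traits, sensor readings


def map_artifact(raw: dict) -> list[tuple]:
    """Map phase: emit (entity_key, attributes) pairs from one raw artifact."""
    out = []
    for mention in raw.get("entity_mentions", []):
        key = (mention["type"], mention["name"], mention["country"])
        out.append((key, mention.get("attributes", {})))
    return out


def reduce_records(acc: dict, pair: tuple) -> dict:
    """Reduce phase: merge attributes for the same entity key."""
    key, attrs = pair
    acc.setdefault(key, {}).update(attrs)
    return acc


def build_entity_db(raw_artifacts: list[dict]) -> list[EntityRecord]:
    """Run map then reduce over all raw artifacts and return clean records."""
    pairs = [p for raw in raw_artifacts for p in map_artifact(raw)]
    grouped = reduce(reduce_records, pairs, {})
    return [EntityRecord(t, n, c, attrs)
            for (t, n, c), attrs in grouped.items()]
```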
Validation and Error Quantification Approach

We consider the world as a system of systems. This system of systems comprises numerous heterogeneous (different cultures in each country), distributed (geographically isolated vs. landlocked countries), and coupled systems (trade practices and financial markets) that are large and complex in their own right, consisting of industry, communication, political, etc. entities, which are themselves coupled. Two critical issues we consider in developing the GII are:
(a) the uncertainties in models, measurement data, and predictions, which lead to the need for rigorous analytic strategies that apply to both predictive models and data;
(b) the dynamic nature of the system of systems, which leads to the need for system state estimation and model updating strategies that adapt in parallel with the real world.
In order to make useful predictions of the GII, the validity of our predictions is assessed through a continuous feedback process involving the two critical issues mentioned above.

Characterization of Uncertainties in Data, Model, and Estimation

Models and data contain uncertainties. In models, there are parametric uncertainties associated with errors in the numerical values of model parameters and projections. There are also non-parametric uncertainties associated with errors in the model form (interconnects, linear vs. nonlinear). In data, there are uncertainties associated with the data source (websites, reports, databases), the timing of the data (when the phenomenon occurred and when it was reported), and rendering the data in a form that is conducive to quantitative analysis (semantics of language, curation of data). Predictive models must accurately reflect both the measured data and the uncertainties in that data. For low-order models and stationary time series, it is usually sufficient to characterize the mean, variance, and higher-order statistical nature of uncertainty using single-variable probability density functions. However, in a big data system of systems with millions of parameters and thousands of data sources, multi-variable statistics must be used in a recurrent manner to track changes in these probability density functions as the real world evolves.

In order to characterize uncertainty, we tap into a variety of data sources to generate a very large sample size for any one variable. For example, if U.S. soft power overseas is the measured variable of interest, then we utilize any and every data source not related to military and/or economic effects to create a database of raw data. We then pass the raw data through a series of MapReduce processes to create a clean database. We then segment the data by nation, geographical position, etc. to estimate the probability density function for this variable in a statistically meaningful way. Furthermore, we generate joint probability density functions that describe how the soft power of "rival" countries is related, to ensure that coupling throughout the system of systems is incorporated.

Multi-variable probability density functions are constructed and analyzed to verify existing sources of data and identify weaknesses that point towards new sources of data. For example, high variance may be detected in a measured public opinion variable of interest from one source or time period relative to all others, suggesting that this source is significantly biased in the way it was reported or compiled. We also consider the effects of data acquisition time and rate in constructing and analyzing these probability density functions. For example, certain bloggers may be more active over certain periods of time, leading to a greater number of samples; during other time periods, these same bloggers may post material less frequently, leading to a potential for aliasing in the data sources. Probability density functions of measured data are also analyzed to verify modeling assumptions such as weak coupling between two geographically separate countries. This analysis provides guidance on how to interpret model predictions in ways that honor variations in measured variables in the real world.
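The variance-based source check described above can be sketched as a short Python illustration, assuming the samples of one measured variable have already been segmented by source; the ratio threshold and function names are invented for the example.

```python
# Hedged sketch of the bias check described above: estimate the
# distribution of a measured variable per source and flag any source
# whose variance is far from the rest. Names and the threshold are
# assumptions for illustration.
import statistics


def segment_stats(samples_by_source: dict[str, list[float]]) -> dict[str, tuple]:
    """Estimate (mean, variance) of the variable for each data source
    that has at least two samples."""
    return {src: (statistics.mean(vals), statistics.pvariance(vals))
            for src, vals in samples_by_source.items() if len(vals) >= 2}


def flag_biased_sources(samples_by_source: dict[str, list[float]],
                        ratio_threshold: float = 4.0) -> list[str]:
    """Flag sources whose variance greatly exceeds the median variance
    across sources, suggesting biased reporting or compilation."""
    stats = segment_stats(samples_by_source)
    variances = [var for _, var in stats.values()]
    if not variances:
        return []
    baseline = statistics.median(variances)
    return [src for src, (_, var) in stats.items()
            if baseline > 0 and var / baseline > ratio_threshold]
```

Flags raised by a check like this feed back into how the model predictions are interpreted against real-world variation.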
For example, subject matter experts may disagree about the benefits of political pressure on fledgling democracies, leading to probability density functions with significant variance. By understanding this source of variation up front using multi-variable statistical analysis, interventions can be designed and evaluated to anticipate and manage these differences of opinion using predictive models.
(PDF) Measuring Soft Power. Available from: https://www.researchgate.net/publication/314141900_Measuring_Soft_Power [accessed Dec 20, 2022].
There are several soft power indices available (such as https://brandirectory.com/softpower/). That might be the easiest way to go, particularly if you find one that has been cited in research similar to yours. Using a stock company that gets support from the state could be tricky: you'd need to differentiate true subsidies from government contracts, and the latter could lead you back into a hard power discussion. Lockheed Martin and Raytheon are obvious examples of hard power plays from the commercial space, but even a company like Amazon or Microsoft could be seen as providing services via contracts that contribute to hard power.