
Alignment Assembly on AI and the Commons — Outcomes and Learnings

This report captures learnings from the Alignment Assembly on AI and the Commons, a six-week online deliberation of open movement activists, creators, and organizations about regulating generative AI.

Published on Jun 11, 2024

The Alignment Assembly was organized by Open Future together with Creative Commons and Fundación Karisma.

The report was written by Shannon Hong, a technologist and writer from the San Francisco Bay Area and Open Future 2024 fellow, and Alek Tarkowski, Director of Strategy at Open Future.

Acknowledgments

The authors would like to thank their collaborators: Patrick Connolly coded the bespoke pol.is instance that we used for the conversation and provided admin support during the assembly. Alicja Peszkowska was responsible for outreach and engagement with assembly participants.

Thank you to Divya Siddarth, Saffron Huang and Flynn Devine of the Collective Intelligence Project, as well as Liz Barry of the Computational Democracy Project for advice throughout the project. Thank you to Paul Keller and Luis Villa for the feedback at various project stages. Gratitude to our thought partners Creative Commons and Karisma, particularly Anna Tumadóttir, Viviana Rangel, and Maria José Parra, for their contributions to the project. Thank you to our committed audience members, Open Knowledge Foundation, the Communia Association, and Derechos Digitales. Gratitude and appreciation to the thirty individuals who helped us with sense-making and understanding the results of our report.

Introduction

Artificial intelligence shapes and affects the Digital Commons; however, there is no consensus on AI's specific impacts on the commons and how advocates and stewards of the Digital Commons should seek to manage this impact.

Generative AI is built on the digital infrastructure of the commons and uses the vast quantity of images, text, video, and rich data resources of the internet: open science research, open source code, and various sorts of training data that is either public or openly shared. Most importantly, AI developers train their models on large amounts of content and data shared by a multiplicity of collections and repositories.

Access to the Digital Commons enables innovation and the development of systems that could become the next general-purpose digital technology. But these developments are not without risks and challenges: from bias and lack of transparency to energy consumption and environmental footprint, from new concentrations of power to impacts on creative work – these are all challenges that can influence the commons and need to be addressed.

To this end, the Alignment Assembly on AI and the Commons aimed to answer the question: What do open movement activists, creators, and organizations think about regulating generative AI? Open Future, together with Creative Commons and Fundación Karisma, organized this conversation over six weeks between 13 February and 17 March 2024.

An alignment assembly is a combination of a survey and a conversation designed to inform policy debates and align technology development with collective values. It is a participatory conversation methodology developed by the Collective Intelligence Project using the online survey platform pol.is.

The Alignment Assembly on AI and the Commons built on previous joint work at the Creative Commons Summit, which took place in Mexico City on 3-6 October 2023. At this event, a group of 30 activists and experts discussed the regulation of AI in the context of the Digital Commons. The result was a set of principles. The formulation of the principles was followed by an in-person alignment assembly, providing a first snapshot of areas of consensus and disagreement. Our goal with the recent virtual assembly was to reach a broader range of individuals and organizations from around the world.

The results of this process show that the emergence of generative AI is challenging established approaches to openness, sharing, and the Digital Commons. We found consensus around the need to consider values beyond openness and the imperative of public infrastructure, investment, and alternatives in AI. The principles for regulating generative AI that started this conversation received broad support, but the assembly also revealed potential areas for refinement. We identified two groups with divergent perspectives that need to be reconciled: the Regulatory Skeptics and the Interventionists. The differences in perspective are, to some extent, regional, pointing to different dominant attitudes in North America, Europe, and other regions.

This report from the Alignment Assembly on AI and the Commons begins with an explanation of our methodology. We then review the results of the proposed principles for regulating generative AI, followed by an analysis of the key areas of consensus. We conclude with an analysis of the key differences between the two opinion groups.

Methodology

Between 13 February and 18 March, Open Future and its partners hosted a virtual, asynchronous Alignment Assembly on AI and the Commons. We gathered over 260 respondents from more than 40 countries to discuss and explore principles and considerations for regulating generative AI from the perspective of the Digital Commons.

Previous work on principles for regulating generative AI

The Alignment Assembly on AI and the Commons builds on work from the Creative Commons summit on AI and the Commons, which took place in October 2023. Open Future and Creative Commons hosted a workshop on generative AI and its impact on the commons during the summit. The group agreed on and released seven principles for regulating generative AI. After the Summit, the principles were published “for further community discussion and to help CC and the global community navigate uncharted waters in the face of generative AI and its impact on the commons.” We are providing a copy of these principles as an annex to this report.

We treated the principles as a starting point, and we were interested in revealing the degree of alignment between activists, creators, and stewards of the commons and among different subsections of the open movement.

Alignment Assembly methodology and Pol.is platform

Alignment assemblies are experimental deliberative processes aimed at incorporating collective input into technology development processes. This methodology was pioneered by the Collective Intelligence Project (CIP), led by Divya Siddarth and Saffron Huang. Siddarth and Huang describe the alignment assembly model in the following way:

“Alignment, so that we can bring technology into alignment with collective values. And assemblies, because they assemble regular people, online and across the country or the world, for a participant-guided conversation about their needs, preferences, hopes and fears regarding emerging AI”.1

Alignment assemblies are part of a broader trend aimed at increasing deliberation and participatory governance of digital technologies. Citizen panels are a related, more advanced form of deliberative process that is gaining popularity, with citizen panels on AI and data being organized in Belgium and the United Kingdom.

Alignment assemblies typically take place online, although the Creative Commons assembly on AI and the commons is an example of one that took place in person. They are typically organized using Pol.is, a survey platform developed by the Computational Democracy Project. Pol.is is “a real-time system for gathering, analyzing and understanding what large groups of people think in their own words, enabled by advanced statistics and machine learning.” It is based on the concept of Wikisurvey, a survey that is collectively developed by users, with the set of questions expanding through input from participants.

The key feature of Pol.is is its ability to map out differences in opinion, group individual respondents by their opinions, and identify consensus that holds across groups who otherwise disagree. Pol.is conducts this analysis in real time and makes the data available to all participants. This creates a feedback loop that encourages users to add statements that further explore the issue in finer-grained detail. Each opinion group that pol.is identifies is then defined by a representative set of statements. Groups are “differently different,” meaning that the representative statements for group A are not the same as for group B.

Pol.is is not necessarily intended for decision-making but rather “for discovering unrealized possibilities in complex, conflicted situations involving widely diverse perspectives.”2
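The grouping step can be illustrated with a small sketch. This is not the actual pol.is implementation, and the data below is synthetic; it only shows the general shape of the analysis described above: encode each participant's votes as a row of agree/disagree/unsure values, project the vote matrix into a low-dimensional space, cluster participants, and then look for the statements on which each cluster's voting diverges most from the rest.

```python
# Illustrative sketch of a pol.is-style opinion-grouping analysis.
# Not the actual pol.is pipeline: the vote matrix is synthetic and the
# representativeness measure is a simplified stand-in.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Participants x statements, with agree = 1, disagree = -1, unsure/pass = 0.
votes = rng.choice([-1, 0, 1], size=(211, 140))

# Project participants into two dimensions (the "opinion map").
coords = PCA(n_components=2).fit_transform(votes)

# Cluster participants into two opinion groups, as in this assembly.
groups = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(coords)

def representative_statements(votes, groups, group_id, top_n=5):
    """Statements where a group's agreement rate diverges most from everyone else's."""
    in_group = groups == group_id
    agree_in = (votes[in_group] == 1).mean(axis=0)
    agree_out = (votes[~in_group] == 1).mean(axis=0)
    return np.argsort(-np.abs(agree_in - agree_out))[:top_n]

for g in (0, 1):
    print(f"Group {g}: {int((groups == g).sum())} participants,",
          f"representative statements: {representative_statements(votes, groups, g)}")
```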

The pol.is report from our Alignment Assembly is available on the Pol.is website.

Target group and survey process

Our goal was to bring together people involved in building and supporting the Digital Commons. There are several terms used to describe this group, including the open movement, the Free Knowledge movement, and the Free Culture movement.3 For the purpose of the assembly, we identified three key groups of stakeholders within the movement:

Activists and experts, including digital rights advocates and legal experts

Stewards, people from organizations that steward collections that are part of the Digital Commons such as Wikimedia, Open Access repositories, and heritage collections

Creators, people who create works that form part of the Digital Commons, broadly: not only visual artists and musicians but also researchers involved with open science or open source AI programmers

Taken together, these groups represent key stakeholders who build and steward the Digital Commons and who, from this perspective, engage with generative AI technologies and their impact. Our hypothesis was that these groups would also represent the main fault lines in the conversation, so that individual opinions would line up with group identity. This hypothesis was not supported by the results of our assembly, and we discuss this further below.

Furthermore, we asked participants to select fields of open that they are active in,4 using a typology that we created based on mapping organizations and individuals active in the open movement. As we demonstrate below, some differences between representatives of these fields are visible.

Respondents first answered a demographic questionnaire, which asked about their level of experience in AI, the field of open that they are active in, nationality, and stakeholder group (creator, activist, steward). They were then funneled to the assembly, hosted on Pol.is.

In Pol.is, participants submitted and voted on short text statements; vote options were “Agree,” “Disagree,” and “Unsure.” To start the conversation, we planted seed comments, which are supposed to “set the tone of the conversation and teach the initial participants how to write good comments.”5 These included statements based on the above-mentioned principles for regulating AI models.6 Overall, 265 participants cast 13,327 votes on 140 statements.

Over the course of the assembly, we moderated statements on two criteria. First, relevance to the conversation — we removed statements that were obvious mistakes, statements that were beyond the scope of our conversation, and spam statements. Second, distinctness from existing statements: we were mindful of voting fatigue and wanted to surface statements that would be additive to the conversation. However, we kept many comments that address similar issues with different tonality and expression, following advice from the Computational Democracy Project, the creators of Pol.is, that the way something is said is just as important as what is said. We had 23 seed statements. Participants submitted 188 statements, and we rejected 71 of them, resulting in a total of 140 statements that were voted on.

We approached representativeness with the mindset that increased participation would allow us to surface interesting ideas and see novel patterns. We hoped to draw insights that would spark discussion and build momentum in understanding AI among those involved in building and supporting the Digital Commons. We based our understanding of the breadth and depth of the open movement on Open Future’s report on the different fields of open.7 We hoped to gain insight into a broad swath of this movement, and we wanted to ensure a distribution of perspectives across different fields of open to allow for more interesting possibilities to emerge. The results of the pol.is-based survey should not be understood as demographically representative in the way that traditional quantitative surveys are. The methodology provides qualitative insights into shared views and attitudes in the open movement.

We also faced limitations in analyzing incomplete data. We share these limitations of our methodology in the appendix.

Demographic analysis of participants

In the Alignment Assembly, 265 individuals voted, but there were some data limitations: we have demographic data for 231 individuals who filled out the demographic form.8

The respondents included 126 activists, 68 creators, and 37 stewards, working across all fields of open. The field most represented was Open Education, with 54 participants, followed by Open Culture and Open Software, with 36 participants each.

A minimum understanding of AI is critical for an informed discussion on this topic, so we asked about respondents’ AI understanding in the survey. While we did not specifically exclude people for their lack of knowledge, we found that our respondents overwhelmingly self-identified as individuals with expertise in AI (24%) or understanding of AI (71%), compared to those with limited or no experience with AI. We also asked respondents whether they use free or open licensing to share their works. 61% do so regularly, 30% occasionally, and 8% do not use open licenses.

Over 40 countries were represented, but our respondents were mostly clustered in the United States and Western Europe. The largest numbers were based in the United States, the United Kingdom, Germany, Canada, and Italy. Latin American organizations such as Derechos Digitales and Fundación Karisma helped us translate content and expand our reach in Latin America. This broader coalition was critical to understanding whether there are regional differences in perspective. There were 107 respondents from Europe, 75 from North America, 19 from South and Central America, 13 from Africa, 13 from Asia, and 4 from Oceania.

Results

The starting point for our assembly was the principles for regulating generative AI models and so we begin the presentation of results with an analysis of the levels of support for the different principles. The open-ended and participatory nature of the pol.is survey means that the votes on additional statements can be treated as further exploration of the issues addressed in these principles.

Examining the Principles

One of the main aims of the assembly was to verify the principles that were defined during the 2023 Creative Commons annual summit. We wanted to see to what extent there is broader alignment around these principles. Data from the assembly shows considerable agreement, with five out of seven principles being supported by more than 80% of the respondents.9

Almost unequivocal support (95%) for the statement “It is important that people continue to have the ability to study and analyze existing works in order to create new ones” (Principle no. 1) is not surprising, as it expresses one of the core underlying principles for advocates of Free Knowledge, open science, and free software.

More interesting is the high level of support, by 90% of participants, for the statement “We should address implications of genAI for other rights and interests” (Principle no. 3), as it signals a need for a more expansive approach among activists who have traditionally focused on copyright. Only slightly lower is the level of support (83%) for the open movement to engage in “defining ways for creators and rightsholders to express their preferences regarding AI training for their copyrighted works” (Principle no. 2).

Respondents also backed measures supporting the creation of public AI systems: investments in public computational resources (88%) and public training datasets (87%), the two parts of Principle no. 7.

There was lower support (77%) for measures ensuring that benefits derived by AI developers are broadly shared among contributors to the commons (Principle no. 6).

We saw greater disagreement and unsureness in two key statements. 61% of respondents agreed that “The use of traditional knowledge for training AI should be subject to the ability of community stewards to provide or revoke authorization” (Principle no. 4). And less than half of respondents (48%) agreed that “Any legal regimes must ensure that the use of copyright protected works for training AI systems for noncommercial public interest purpose is allowed” (Principle no. 5). This result, with 16% against and 34% uncertain, spells a major shift for open advocacy, as exceptions for text and data mining (a broader category of which AI training is part) have been one of the main goals of advocacy related to the right to research.

Overall, the pol.is conversation has confirmed support for the majority of the principles, while two of them should potentially be revised.
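For readers who want to reproduce this kind of tally from the published pol.is data, the calculation is straightforward. The sketch below assumes a hypothetical export file and column names ("statement_id" and "vote", with 1 = agree, -1 = disagree, 0 = unsure); it is not the actual pol.is export schema, only an illustration of how the support levels above are derived.

```python
# Minimal sketch: per-statement support levels from a hypothetical vote export.
import pandas as pd

votes = pd.read_csv("assembly_votes.csv")  # hypothetical file and schema

summary = (
    votes.groupby("statement_id")["vote"]
    .agg(
        agree=lambda v: (v == 1).mean(),
        disagree=lambda v: (v == -1).mean(),
        unsure=lambda v: (v == 0).mean(),
        n="count",
    )
    .sort_values("agree", ascending=False)
)

# Statements supported by more than 80% of voters, as with five of the seven principles.
print(summary[summary["agree"] > 0.80])
```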

The two opinion groups

Pol.is aggregates the votes and divides participants into opinion groups, based on an analysis of the combined responses. Opinion groups are made of participants who voted similarly to each other, and differently from other groups. Pol.is groups those respondents who provided seven or more responses — in our case, 211 individuals.

These respondents were divided by the pol.is algorithm into two opinion groups, Group A and Group B.

It is critical to note that the groups are not divided on every statement; there is significant overlap on many statements. Thus, the groups should be treated as two distinct factions that help us understand internal divisions within the open movement in the debate on AI and the commons. Pol.is also does not provide a measure of how distinct or divided in opinion the various groups are, which may be an interesting area for future exploration.

The two groups are the Interventionists (Group B) and the Regulatory Skeptics (Group A), with the former being the dominant one. The Interventionists believe that AI regulation is needed to support the commons and that copyright should be used to address other concerns that can broadly be understood as ethical. The Regulatory Skeptics, the minority group, see openness as facilitating innovation, are optimistic about new ways that AI tools can contribute to the commons, and believe that copyright is often not the right tool for regulating AI.

Group B: The Interventionists (74% of participants)

Group B is the majority of our respondents, who see generative AI as exploiting creators and the commons and violating social norms and potentially even laws. They are also uncertain about the positive impact of AI on the commons and the need to use AI-based tools to create content in the commons. Therefore, they are Interventionists: in agreement with many policy proposals that limit AI development in order to limit corporate interests related to AI and to protect the commons.

Group A: The Regulatory Skeptics (26% of participants)

Group A is more optimistic than Group B about the positive benefits that AI can have on the commons. They are more inclined to believe that the use of existing works in AI is valuable and reasonable and fits with the values and goals of open sharing. They are also keen to explore how AI-generated content can be part of the commons. They agree with Group B that the emergence of AI technologies might require introducing some restrictions to openness but tend to believe that copyright law is not the right tool for this purpose. They are the Regulatory Skeptics, more critical of specific proposals for AI regulation, and they tend to have an aversion to regulatory overreach. Group A comprises 56 individuals, which is 26% of our participants.

The two groups should not be understood simply as AI optimists and AI pessimists — and it’s worth noting that the two groups share a lot of views in common. In particular, they agree that AI technologies need to be regulated in some way and that approaches to open sharing need to be modified due to the emergence of generative AI. Still, Group A is closer to a traditional vision of open sharing and is critical of using copyright-based mechanisms to regulate AI. Group B, in turn, is interested in a wider variety of regulatory measures for managing the commons.

Group demographics

We have both the demographic data and grouping data from pol.is for 172 respondents. Exploring the demographic information between Group A and Group B, we looked at the ratios of different categories of respondents present in each group. There were no significant differences between Group A and Group B with regard to the share of activists, stewards, or creators. This is a surprising takeaway, as we had hypothesized that this axis would be divisive.

In terms of expertise, self-identified experts were far more likely than non-experts to be in Group A: 50% of those who self-identified as experts were in Group A, compared with 20% of those who self-identified with less expertise.

Taking into account the various fields of open, there is a higher-than-average ratio of Group A members among participants who work in the fields of Open GLAM, Open Software, and Open Culture. In turn, there are proportionally more members of Group B among those who work in the fields of Open Education, Data, and Science.

Finally, in examining country data, Group A contains proportionally more North Americans than Group B: 29 of the 55 respondents from Canada and the United States were in Group A. At the same time, all respondents from the UK and Germany (29 in total) were in Group B. These samples were too small to allow for meaningful analysis of regional differences. Still, the results suggest possible differences in attitudes towards AI, the shape of policy debates, and acceptable solutions between different parts of the world.

There are also visible differences between the two groups regarding the more divisive of the seven principles. First, traditional knowledge and community authorization (Principle no. 4) was more controversial in Group A, where over 36% of the group disagreed and 20% were unsure. In turn, Principle no. 5, concerning the use of copyrighted works for noncommercial AI systems, was controversial for Group B members, where 20% disagreed and 38% were unsure.
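The group-demographics comparison reported in this section boils down to a join and a cross-tabulation. The sketch below shows one way to do it; the file names and column names ("respondent_id", "stakeholder", "region", "group") are assumptions for illustration, not the project's actual data layout.

```python
# Sketch: cross-tabulating opinion-group membership with demographic categories.
import pandas as pd

demographics = pd.read_csv("demographics.csv")   # one row per survey respondent
grouping = pd.read_csv("polis_groups.csv")       # respondent_id, group ("A" or "B")

# Keep only respondents with both demographic and grouping data (172 in our case).
merged = demographics.merge(grouping, on="respondent_id", how="inner")

# Share of Group A vs Group B within each stakeholder category (activist/creator/steward).
print(pd.crosstab(merged["stakeholder"], merged["group"], normalize="index"))

# The same view by region, to probe the North America / Europe contrast described above.
print(pd.crosstab(merged["region"], merged["group"], normalize="index"))
```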

On Consensus and Division

One of the main goals of pol.is is to surface which statements are divisive for the participants. The algorithm clusters participants into distinct opinion groups based on these distinctions. In addition, Pol.is provides tools that allow users to add their own new statements — so that the issues can be further explored in fine grained detail.

In this section, we highlight key issues on which there is consensus and division. Where there is division, we go deeper to tease out why people disagree, drawing on comments that address related problems in different language and expression.

Consensus

This Alignment Assembly revealed three key areas of consensus. First, the prioritization of values beyond openness in considering the open movement’s policies towards AI (related to principle no. 3). Second, the need for public investment in AI (related to principle no. 7). And third, the call for the open movement to make education about AI and its impact, and public-facing communication on AI a priority.

Beyond openness

Considering values beyond openness alone emerged as a new key area of consensus — possibly the most interesting result of the assembly. Almost all participants agree that consideration of ethics in AI is just as important as openness of AI systems (statement 110) and that openness is not the only value relevant for activists, creators, and stewards in the commons (statement 18). Even more telling is the result from statement 13, where 72% of respondents do not agree that “any restrictions to sharing, including ethical ones, are against the spirit of ‘open’.” We read this result as a sign that the open movement believes that the emergence of AI technologies spells fundamental shifts in how openness is defined and what limitations are considered acceptable. That this group agreed that the “spirit of open” allows for ethical restrictions is, in some ways, an affirmation of the changing nature of the open movement.

Public investment in AI

Public investment, public alternatives, and public good were a resounding consensus point for this community. Both Group A and Group B voted overwhelmingly for public investment that serves the public good, and both desired non-commercial public alternatives and public participation in AI. This suggests that advocates for openness are strongly aligned with those promoting ideas such as Public Digital Infrastructure, AI systems as digital public goods, or public options for AI.

Thought leadership

Two statements that ranked highly in group-informed consensus concerned the open movement’s thought leadership on AI. The first (statement 31) is a call to support citizens in developing skills to both understand and critique AI, and the second (statement 146) calls for the open movement to be involved in the public-facing conversation around AI. The open movement can be insular, and perhaps these statements indicate a desire for the movement to become more influential in guiding public opinion around AI.

Division

The key areas of division between Group A and Group B were, first, the extent of AI’s exploitation of the commons; second, the role of AI in producing commons-based resources; and third, the use of copyright as a legal tool in discussing AI. As facilitators of this conversation for the open movement, we wish to treat these areas of division with care, not to aggravate or deepen divides between individuals who hold diverging opinions, but to help the movement work together more coherently with an awareness of its internal differences.

Exploitation

Group B believes that Generative AI developers and providers, and the systems that they create, exploit creators and the commons for profit. Group B has a consistently negative impression of AI’s legality, ethics, and value. Group A mostly disagrees or is unsure about the extent to which AI is exploitative. Some statements around exploitation and generative AI are emotionally charged – see, for example, statement no. 23. Nevertheless, it’s interesting that nearly 30% of all respondents agreed that GenAI is an “effort by big tech to devalue creators’ labor” (statement no. 23). On the other hand, Group A is almost universally against this statement.

Overall, 63% of individuals in our participant pool believe that generative AI is exploitative of creators (statement 46), indicating that the open movement may need to devote more attention to concerns of exploitation and, more broadly, of AI-related impact on creative labor.

The commons and AI tools

A significant discussion point was how AI should be used as a tool for creating the commons. 59% believed that there is a benefit to AI-generated contributions to the commons (statement no. 33), with this proportion being much higher in Group A than in Group B. However, there was significant disagreement and confusion on whether or not AI should be used to generate educational resources (statement no. 39, 48% said no) and a desire to steward verified, human-crafted reference points (statement no. 29, 68% want commons repositories to be human-crafted). The answers suggest that the distinction between human and synthetic (AI-generated) content is relevant for the commons, with possible policy considerations for both types of content.

While there is agreement that generative AI can be considered an opportunity to reconsider current copyright legislation, the groups differed significantly on how this reconsideration should be done. Group A is generally against most suggestions that were offered on how copyright law should be used to shape the training of AI models and categorically against using copyright to specifically “blunt the harmful effects of AI and automation on creators and workers” (statement no. 11). While Group A wants to steward ethical AI broadly speaking, it also believes that copyright is the wrong tool for this mission.

Although participants see opportunities for new legislation and legal innovation, there is significant disagreement on how AI applies to current copyright law. Many are unsure whether AI works should be protected by copyright and, if so, how. Overall, there is significant confusion and uncertainty on the right path in copyright legislation and policy related to generative AI. It is worth noting that the two groups have opposing views on the statement “AI models should be barred from training on ‘All rights reserved’ works without an explicit license.”

High levels of “unsure” responses for some statements (11, 32) signal a lack of clarity among participants on specific regulatory proposals — suggesting both the emergent nature of these debates and the opportunity for broader outreach and education on these issues. The Computational Democracy Project team suggests that in pol.is conversations on technical topics, high levels of “pass” votes indicate individuals’ willingness to receive new information and to hold off on forming their opinion until better equipped. These are positive qualities associated with a culture of learning.

Conclusion

We would like to highlight four high-level conclusions from the Alignment Assembly.

First, the emergence of generative AI challenges established approaches to openness, sharing, and the commons: there is a consensus that the open movement should consider values other than openness alone. Our research shows that a large set of factors, beyond the emergence of generative AI, contribute to this change in attitude among movement members. Revisiting the principles and norms that underpin the various fields of open will be as relevant as establishing shared positions on regulating generative AI.

Second, the seven principles on regulating generative AI that emerged from the Creative Commons summit received high support. While further refinement is needed to secure as broad an endorsement as possible, this consensus indicates that the open movement will be able to reach a firm set of principles for regulating generative AI.

Third, public investment, public accountability, and public infrastructure in AI are issues with high consensus among the participants. Both groups believe in the benefits of public involvement in AI, which suggests a clear direction for advocacy and movement-building.

Fourth, the Assembly shows significant differences in views between participants from North America and those from the rest of the world, or at least from Europe. While there is agreement on the principles, there are regional differences with regard to some of the key and most divisive statements. These differences can impact advocacy work and suggest that cross-regional dialogue is needed to explore them and seek shared advocacy positions.

This Pol.is-based Alignment Assembly is one step in defining a shared position and a unified response to changes related to AI technologies. It had the value of being swift and asynchronous and, therefore, relatively inclusive. But we also know that many people whose voices should be heard and whose opinions matter were not present in this conversation. We also acknowledge that pol.is conversations do not offer means for a deeper exploration of the issues being discussed. We hope that the results of the Assembly will serve as a basis for further explorations of shared positions on regulating generative AI.

Annex 1: Principles for regulating generative AI models

The following seven principles for regulating generative AI models were formulated during a workshop on AI, creators and the Commons organized by Open Future and Creative Commons. The workshop took place on 3 October 2023 in Mexico City, as a side event to the Creative Commons Summit 2023. The principles are meant to ensure that regulation of AI technologies serves to protect the interests of creators, people building on the commons (including through AI), and society’s interests in the sustainability of the commons.

The original version of the principles can be found on the Creative Commons site.

Background considerations

Recognizing that, around the globe, the legal status of using copyright-protected works for training generative AI systems raises many questions, and that there is currently only a limited number of jurisdictions with relatively clear and actionable legal frameworks for such uses, we see the need to establish a number of principles that address the position of creators, the people building and using machine learning (ML) systems, and the commons under this emerging technological paradigm.

Noting that there are calls from organized rightsholders to address the issues posed by the use of copyrighted works for training generative AI models, including calls based on the principles of credit, consent, and compensation.

Noting that the development and deployment of generative AI models can be capital intensive, and thus risks reinforcing (or exacerbating) the concentration of markets, technology, and power in the hands of a small number of powerful for-profit entities largely concentrated in the United States and China, and that currently most of the (speculative) value accrues to these companies.

Further noting that, while the ability for everyone to build on the global information commons has many benefits, the extraction of value from the commons may also reinforce existing power imbalances and in fact can structurally resemble prior examples of colonialist accumulation.

Noting that this issue is especially urgent when it comes to the use of traditional knowledge materials as training data for AI models.

Noting that the development of generative AI reproduces patterns of the colonial era, with the countries of the Majority World serving as consumers of the Minority World’s algorithms and as providers of data.

Recognizing that some societal impacts and risks resulting from the emergence of generative AI technologies need to be addressed through public regulation other than copyright, or through other means, such as the development of technical standards and norms. Private rightsholder concerns are just one of a number of societal concerns that have arisen in response to the emergence of AI.

Noting that the development of generative AI models offers new opportunities for creators, researchers, educators, and other practitioners acting in the public interest, as well as providing benefits to a wide range of activities across other sectors of society. Further noting that generative AI models are a tool that enables new ways of creation, and that history has shown that new technological capacities will inevitably be incorporated into artistic creation and information production.

Principles

  1. It is important that people continue to have the ability to study and analyse existing works in order to create new ones. The law should continue to leave room for people to do so, including through the use of machines, while addressing societal concerns arising from the emergence of generative AI.

  2. All parties should work together to define ways for creators and rightsholders to express their preferences regarding AI training for their copyrighted works. In the context of an enforceable right, the ability to opt out from such uses must be considered the legislative ceiling, as opt-in and consent-based approaches would lock away large swaths of the commons due to the excessive length and scope of copyright protection, as well as the fact that most works are not actively managed in any way.

  3. In addition, all parties must also work together to address implications for other rights and interests (e.g. data protection, use of a person’s likeness or identity). This would likely involve interventions through frameworks other than copyright.

  4. Special attention must be paid to the use of traditional knowledge materials for training AI systems, including ways for community stewards to provide or revoke authorisation.

  5. Any legal regime must ensure that the use of copyright protected works for training generative AI systems for noncommercial public interest purposes, including scientific research and education, is allowed.

  6. Ensure that generative AI results in broadly shared economic prosperity – the benefits derived by developers of AI models from access to the commons and copyrighted works should be broadly shared among all contributors to the commons.

  7. To counterbalance the current concentration of resources in the hands of a small number of companies, these measures need to be flanked by public investment in public computational infrastructures that serve the needs of public interest users of this technology on a global scale. In addition, there also needs to be public investment in training datasets that respect the principles outlined above and are stewarded as commons.

Annex 2: Limitations of the pol.is methodology

Incomplete demographic and voting data

Our aim was to collect additional demographic data through a Typeform survey displayed on the conversation’s website. The survey was completed by 230 of the 265 individuals who participated in the pol.is conversation. Furthermore, pol.is groups only those respondents who voted on at least seven statements, which meant that, in our case, only 211 individuals were grouped. Both issues ultimately reduce our sample size. All told, out of 292 interactions with either the Typeform form or the Pol.is survey, 172 respondents had both been grouped and had matching demographic information.

Challenges with iterative surveying

Pol.is methodology assumes that respondents will return to the survey multiple times in order to review new statements and add their own. While we saw some activity from returning respondents, overall we observed limited iterative engagement. This is addressed to some extent by pol.is itself, whose algorithm selects which statements are displayed to both new and returning respondents based on their relevance for establishing groups, consensus, and division.

Outreach and inclusion

We made an effort to make the conversation inclusive and, in particular, took care to conduct outreach across various languages and regional networks. The Pol.is tool offers automatic translation of statements, and in addition, we translated all content, with the help of the Karisma Foundation, into Spanish. Nevertheless, the response rate from Global Majority countries was low.

Limited means for in-depth conversation

Experts who helped us with sense-making commented that they would want to understand more about the rationale of someone’s vote or statement. Similarly, as we analyzed the results, we asked: how could we get the context or lived experience that informed the statements that participants contributed? What does it mean to name these groups and interpret their interest areas? How much further do we need to expand the conversation before coalescing on a movement-wide set of policy positions? More research into the positions and attitudes of activists, creators, and stewards from the commons is needed.

Annex 3: List of statements

Below is the list of the 140 moderated statements participants voted on, including 23 seed statements and 117 submitted by participants. The screenshots in the report above show the original numbering from the pol.is system. Note that the numbers differ between the screenshots and this list because, in its listings, pol.is includes statements that were moderated out. An additional discrepancy arises because the numbers assigned by Pol.is start at no. 0, not no. 1.

Seed statements

  1. It is important that people continue to have the ability to study and analyse existing works in order to create new ones

  2. We should define ways for creators and rightsholders to express their preferences regarding AI training for their copyright works

  3. We should address implications of genAI for other rights and interests (data protection, use of a person's likeness or identity)

  4. The use of traditional knowledge for training AI should be subject to the ability of community stewards to provide or revoke authorisation

  5. Any legal regimes must ensure that the use of (c) protected works for training AI systems for noncommercial public interest purposes is allowed

  6. Benefits derived by developers of AI from access to the commons and (c) works must be broadly shared among all contributors to the commons

  7. There is need to promote public investment into public computational resources that serve the needs of public interest on a global scale

  8. We must foster public investment into training datasets that are stewarded as commons.

  9. Organizations that steward collections and repositories should label content as human or synthetic.

  10. Transparency of training data should be a requirement for any AI model or system.

  11. Without investment into broadly available training datasets, large companies will dominate AI.

  12. Copyright should be used to blunt harmful effects of AI and automation on creators and workers.

  13. Requiring that AI training is done only with content licensed by rightsholders will help large media companies, but won't help most artists and creators

  14. Any restrictions to sharing, including ethical ones, are against the spirit of “open”

  15. A project that shares resources only with known organizations and people can still be meaningfully open.

  16. I am worried that AI will make it even harder for artists to make a living.

  17. AI systems should have the same rights as humans to read and consume material.

  18. AI reduces people's motivation to share works openly.

  19. Openness is not the only value our community should care about.

  20. AI training is just like any other use of openly licensed content.

  21. Open repositories (OA journals, GLAM collections, etc.) need special strategies to cope with AI

  22. Addressing negative environmental impacts of AI is part of the conversation about AI and the commons.

  23. Governments should build their own large language models.

Statements submitted by participants

Please note that the numbers in this annex continue the numeration from the list of seed statements, which means that these statements correspond to numbers 23 through 139 in the pol.is system, where numbering starts at 0 rather than 1.

  1. GenAI produces mostly lousy art and bad text, but is being sold as transformative in an effort by big tech to devalue creators' labor.

  2. Like last bubble's killer app, paying ransomware with cryptocurrencies, the "best" of genAI is deepfake porn and political misinformation.

  3. AI should enhance, rather than replace, human cognition & creativity. It follows that AI legislation should also be human-centric in design.

  4. In light of AI advancements and data sovereignty issues, the "open" movement should consider a rebrand

  5. All AI works should not be protected by copyright.

  6. Non-copyrightable AI work can benefit open culture, for example Wikipedia.

  7. AI-generated photos cannot replace human-taken photos and will mislead people.

  8. Being open means being against extractive economies of all kinds from minerals through to data

  9. AI has fundamentally changed what being "open" means

  10. Open approaches in an AI era need new business models

  11. Open must always be transparent

  12. AI should not be used to generate educational resources (e.g. Wikipedia, Oxford dictionary)

  13. The open movement is in crisis at present

  14. Let's separate the idea of 'open' from the idea of 'business'

  15. Big tech and openness are mutually exclusive

  16. AI models contain no copyrightable human expression and so are not copyrightable.

  17. All gen AI produced works are in the public domain constituting a new kind of synthetic commons.

  18. Gen AI exploits creators and the commons for profit without permission, credit or compensation.

  19. We need alternative public good AI systems which the public can participate and opt in to.

  20. The open community must speak to differences in the AI hype/marketing vs actual uses

  21. AI work may be copyrightable if there is a high degree of human involvement.

  22. We need to limit AI to being used militarily (e.g. cyberattack).

  23. AI generated content is only in the public sphere once a person has decided to publish it. The responsibility is still on individuals.

  24. There is no "AI works". There is only work done by, and published by, people assisted with various tools.

  25. Wikimedia can benefit from AI assistance, but its community will want to maintain editorial and publishing power to keep its validity.

  26. AI brought forward the questions of ownership and fair use. We've benefitted from not having to worry about it until now.

  27. Just like a low-effort snapshot photograph is copyrightable, so too should AI-assisted works with low human involvement be copyrightable.

  28. By default LLM should be prevented from crawling the web; a developer specified flag should indicate whether it is acceptable to download.

  29. AI systems must be able to identify when they are hallucinating or inventing information.

  30. The "open science" community has already done great work to offer competitive models with limited external investment.

  31. AI systems must always offer a means of exporting user data for the purposes of data portability.

  32. There must be non-commercial open public alternatives to closed corporate AI systems.

  33. AI systems should not undermine human autonomy or agency.

  34. Users of AI systems must be granted ownership of outputs created through their interactions and inputs.

  35. Users of AI systems should receive a share of profits if their data or usage trains or improves the AI’s capabilities.

  36. AI-generated articles should not be allowed on Wikipedia, even if the content is accurate.

  37. GenAI violates norms and probably laws but is somehow protected by being "at scale" and by being associated with vast monied interests.

  38. Commons repositories such as Wikimedia projects must remain human-crafted to provide a verified reference point

  39. Support citizens in developing the basic skills needed to understand AI, GenAI and mainstream applications with a critical approach

  40. Current open source licenses aren't applicable to AI models as models don’t contain any source code.

  41. The personification of AI serves to undervalue true human contributions to arts and sciences

  42. Open Community has more to gain by focusing on Redress and Terms of Service rather than try to control the Hype cycle

  43. A library or museum should be set up to collect AI-generated works.

  44. AI systems must disclose how their continuous long-term use could influence behavior, habits, perceptions, or mental health over time.

  45. Openness in AI systems should be aligned to ethical principles.

  46. AI should not generate "fake" photos of real people.

  47. Open GLAM will benefit from digging deeper into studying and examining models for open data, licensing and attribution from open science.

  48. If open, AI can increase access to knowledge

  49. Big AI can be curtailed by regulating Big Data.

  50. For an AI model to be meaningfully considered open source, all training data must be public and re-usable for other open source AI models.

  51. Large scale, automated use of openly licensed works is a mark of success. AI has issues, but it should not shape our definition of openness

  52. The open movement has focused on freedoms and permissions at the expense of other norms. The issue around AI just highlights this.

  53. Educational organizations need easy and direct access to safe, open, transparent and independent AI systems.

  54. Along with all the risks it brings with it, GenAI can also be considered an opportunity to radically reconsider current copyright legislations

  55. For the importance of linguistic diversity, LLM Gen-IAs should be programmed in native languages to avoid language extinction

  56. The open movement should be paying attention also to sovereignty related to traditional and Indigenous knowledges and cultural works.

  57. Indigenous data and content raises particular ethical questions for openness

  58. Basic info about how GenIA works should be included in every educational program, starting with primary school or even kindergarten, asap

  59. The Open Source Definition's requirement that open source allow all uses is in tension with current or future AI laws.

  60. Open source can't solve all the world's problems.

  61. Predictability for users of openly-licensed works is more important than reducing legal liability for creators of openly-licensed works.

  62. Open should focus on collaboration, leaving ethical issues to be solved through government.

  63. The open movement needs a 'decolonial turn' to acknowledge its roots in the global north and explore what openness means for global south

  64. Openness can be highly exploitative esp. when it concerns data that represents or is collected by/from people in underserved communities

  65. Openness in AI systems is not binary (open vs. closed). We should rather think of a gradient of openness relating to different AI elements.

  66. Advances in the generation of synthetic data will lessen or eliminate many commons-based concerns over generative AI systems.

  67. Open-source models release model weights and code under an open-source license. A description of what data was used is sufficient to be open

  68. Orgs & people involved in the Open movement need to be actively involved in public-facing conversations surrounding the use & training of AI

  69. The principle that publicly funded research should be publicly available should be expanded to computed science.

  70. Generative AI will necessitate the rethinking of copyright, attribution, and royalty structures.

  71. Technological approaches to tracking provenance and trust (e.g. distributed ledger-based solutions) are a promising way to improve LLM data quality

  72. The open movement should work on an open GenAI model.

  73. AI can aid in generating educational resources but needs human supervision.

  74. Copyright holders may not be the original creators of the work so focusing on copyright is not necessarily best to address artist job loss

  75. Commercial use of AI models trained on commons based data should require contributing a % of revenues back to the commons.

  76. The training & fine-tuning of AI by everyone around the world will only happen if they are contributing to a widely available open platform.

  77. Democracies should consider regulating chatbot speech in election-related contexts. Otherwise, regulation should focus on use cases.

  78. To save resources and keep things simple, we need a focus on AI-sufficiency: Does this system require AI at all?

  79. We don't know what "integrity in science" refers to, as in the case of predatory journals that operate online (not always open-access)

  80. AI should be able to make use of anything existing for its training, just like humans are permitted to do.

  81. While copyright is helpful in regulating Big Tech short term, overemphasis is likely to cause further centralization of power long-term

  82. Governance is integral to an understanding of Commons, and Commons is therefore a better concept than Open, which lacks the same emphasis.

  83. An AI model should be considered a derivative work of all the data it was trained on.

  84. Works based on an AI model should be considered a derivative work of all the training data it was based on.

  85. Works based on an AI Model should be considered a reproduction of the training data is based on.

  86. AI models should be barred from training on "All rights reserved" works without an explicit license.

  87. The Commons should be cultivated to provide open training data for all purposes.

  88. AI will contribute massively to societal destruction and should be banned or heavily regulated.

  89. While the internet is a public source of knowledge, AI models should pay for the data they are trained on if they seek profit

  90. Every source online should disclose whether or not its content has been AI generated

  91. I consider myself an activist.

  92. The Commons should be cultivated to provide open training data to its members; big tech should get access by paying fees

  93. Just because something is publicly available, it doesn't mean that it's ok to exploit that resource for private profit

  94. I feel more respected when I'm informed whether something I'm reading is written by an AI vs a human.

  95. Publicly available data should be accessible to AI developers, however, there should be opt out systems for creators.

  96. Solutions to many challenges and concerns related to AI (ethics, labor, harm, copyright) begin with AI literacy across all education levels.

  97. Data repositories should be funded to devise/implement interoperable approaches for data AI-readiness checks, metadata and distribution.

  98. There are adequate technical tools to determine if an AI has been trained on a specified dataset.

  99. There should be an agreed way of specifying the person legally responsible for the output of a generative AI.

  100. Professional codes of conduct have a part to play in regulating generative AI.

  101. Generative AI should have an independent legal personality, like a corporation.

  102. Failing to establish transparency criteria for AI could affect the sustainability of the heritage and knowledge of ancestral communities

  103. Designing a system that allows creators to exclude their content from the data used to train GenAI is a double-edged sword

  104. AI-generated synthetic media presents critical new issues around trust and authenticity.

  105. Generative AI requires from societies and citizens of the world to Rethink everything! We should not waste this crisis.

  106. Open should also mean "efficient" for re-use and focus on proportionality

  107. Indigenous communities should be involved in the development of training datasets in their mother tongue and their ancestral knowledge

  108. Indigenous communities should be involved in the stewarding of training datasets, ethical requirements, alignment testing and AI governance

  109. The biggest issue with GenAI is a lack of transparency and accountability.

  110. Open licenses should be adapted to allow licensors to select different permissions for AI training

  111. Using works to train AI should be based on opt-outs, not opt-ins. We should focus in creating standards for opting-out of AI training.

  112. I don't think I understand enough to make a coherent statement or identify what is missing or needs to be added

  113. AI systems should not be used for military purposes.

  114. It depends on how the generated AI art is generated & its artistic merits

  115. There should be a global movement for ethical, open AI training data sets to lower competitive barriers and improve AI

  116. A wikipoll like Pol.is is a good start but we need actual deliberation to collectively answer the complex questions this conversation poses.

  117. Private commercial entities are likely to exploit issues of trust and authenticity around synthetic media to gain control over the commons.

https://lookerstudio.google.com/embed/s/tOtOrSCfUFI
https://lookerstudio.google.com/embed/s/vP5btsI8KvE
https://lookerstudio.google.com/embed/s/gVexENTmdjE
https://lookerstudio.google.com/embed/s/kD-uhwNxaPQ
https://lookerstudio.google.com/embed/s/heS67F0Mfy4
https://lookerstudio.google.com/embed/s/sDMHkAyC1qE
https://lookerstudio.google.com/embed/s/jGfPDmH3lF4
https://lookerstudio.google.com/embed/s/q3tfE56SZcI
https://lookerstudio.google.com/embed/s/p7uftq3-mOA
https://lookerstudio.google.com/embed/s/uzpsBEorEQw
https://lookerstudio.google.com/embed/s/m9Rcs3-eCjA
https://lookerstudio.google.com/embed/s/kpgJHpJIlVI
https://lookerstudio.google.com/embed/s/jqEJgHCjeGU
https://lookerstudio.google.com/embed/s/gcSuIsWCx2M
https://lookerstudio.google.com/embed/s/sJbKnIaPPVw
https://lookerstudio.google.com/embed/s/jdCLdrGUIiY
https://lookerstudio.google.com/embed/s/jWY4aRab-58
https://lookerstudio.google.com/embed/s/mR-b3n6cek0
https://lookerstudio.google.com/embed/s/qaWAEcBptuE
https://lookerstudio.google.com/embed/s/uVGxdolKojw
https://lookerstudio.google.com/embed/s/jiJSZT6XZQg
https://lookerstudio.google.com/embed/s/iCXv2CVfj9w
https://lookerstudio.google.com/embed/s/kuW8TPHvV_c
https://lookerstudio.google.com/embed/s/l6K3E9Brfuk
https://lookerstudio.google.com/embed/s/sYyJNlUwPAU
https://lookerstudio.google.com/embed/s/u12L0Lv6r34
https://lookerstudio.google.com/embed/s/tqslBwyzbLs
https://lookerstudio.google.com/embed/s/gzbcHfo-7R4
https://lookerstudio.google.com/embed/s/g4tw63Wqn74
https://lookerstudio.google.com/embed/s/vT98_JWkLhw
https://lookerstudio.google.com/embed/s/mtQlcDgYX98
https://lookerstudio.google.com/embed/s/i8HHSdwgCLQ
https://lookerstudio.google.com/embed/s/lKotjTx6sNI
https://lookerstudio.google.com/embed/s/hOyCo21uohw
https://lookerstudio.google.com/embed/s/jXL2OvTRFkA
https://lookerstudio.google.com/embed/s/uvLUONcxzIw
https://lookerstudio.google.com/embed/s/srU1ZCeEQBQ
https://lookerstudio.google.com/embed/s/tCvPFHa_Wys
https://lookerstudio.google.com/embed/s/vlaoSl_V2Eo
https://lookerstudio.google.com/embed/s/nU4R1ds_z_w
https://lookerstudio.google.com/embed/s/sLOAnCfj5cI
https://lookerstudio.google.com/embed/s/mkwSiTKE1J8
https://lookerstudio.google.com/embed/s/gyJPwaxL91M
https://lookerstudio.google.com/embed/s/lHa1j4gzGVI
https://lookerstudio.google.com/embed/s/oZx9BDQKg8Q
https://lookerstudio.google.com/embed/s/vB8Z3uKHyS8
https://lookerstudio.google.com/embed/s/jemTxf32rYY
https://lookerstudio.google.com/embed/s/rgKddMxHizo
https://lookerstudio.google.com/embed/s/gF-JSegUmgo
https://lookerstudio.google.com/embed/s/smHHm00b69k
https://lookerstudio.google.com/embed/s/tWKY5W_Cf7s
https://lookerstudio.google.com/embed/s/ouK3NJYmCoU
https://lookerstudio.google.com/embed/s/gAlrfTy08wE
https://lookerstudio.google.com/embed/s/rDh70XlkI18
https://lookerstudio.google.com/embed/s/ndNsARuZeu8
https://lookerstudio.google.com/embed/s/gzwb2ZiABao
https://lookerstudio.google.com/embed/s/vRJrgZGZ9oQ
https://lookerstudio.google.com/embed/s/pTNDjUMz-1U
https://lookerstudio.google.com/embed/s/qoq0Eub4xCc
https://lookerstudio.google.com/embed/s/tW_go2xyJH0
https://lookerstudio.google.com/embed/s/hUgxbkpjKxU
https://lookerstudio.google.com/embed/s/kFwCPwb6pT8

https://www.serialzone.cz/uzivatele/232287-seynzctzlnbsvrlt/
https://www.fdb.cz/clen/210883-metwiqcqdapmzrlf.html
https://starity.hu/profil/516180-qlnmkvedwapbzawv/
https://files.fm/yrvsruaiugahfrqz/info
https://tinhte.vn/profile/mgwoqhggxidlsoge.3185796/
https://www.outdoorproject.com/users/ofvjcxybcmwadogp-jwxejnpdbbpwbkvf
https://muckrack.com/iwrdmsnlwgmlbtfs-pcrwjsblwvgmfsny/bio
https://bootstrapbay.com/user/hkinqiqrwnhnwnzg
https://www.bricklink.com/aboutMe.asp?u=nelzazqpdktbopb
https://findaspring.org/members/hqbhpvjpaayeagfl/
https://schoolido.lu/user/nqgiuhlbvefqmaao/
https://www.tai-ji.net/members/profile/3326128/rlflrplsdnwtlttl.htm
https://www.thepetservicesweb.com/members/profile/3326130/oiymmxfurpkjmbfa.htm
https://www.greencarpetcleaningprescott.com/members/profile/3326131/oqgguicvfhfbcvli.htm
https://app.roll20.net/users/15284881/utkxyvinvoucwnbq
https://pbase.com/dytwzraqikpinjbz
https://www.furaffinity.net/user/tbszicdgfksyysjy
https://www.beatstars.com/ietczohstsjmldkd/about
https://3rd-strike.com/author/pmmjgofmsfdptluj/
https://www.elephantjournal.com/profile/ojmlkjgybjatmnui/
https://www.facer.io/user/1uuiw7QUuP
https://www.bitsdujour.com/profiles/ll3ciG
https://ilm.iou.edu.gm/members/iqozpsztntrspjrx/
https://espritgames.com/members/45242120/
https://www.rwaq.org/users/l52zf1ai84-20241129162222
https://www.faneo.es/users/alsggdkctawonact/
https://kurs.com.ua/profile/72470-ycyxedrzkfywzald/?tab=field_core_pfield_11
https://www.egresadosudistrital.edu.co/virtualcourses/forums/users/pugmecfkqkjrhssp
https://kbs.knutsford.edu.gh/profile/phcbtfmjtfnzdpxw/
https://ucgp.jujuy.edu.ar/profile/trnzspcuyjruyjui/
https://lms.aimms.edu.pk/profile/rgzvxpvnhylmambo/
https://www.colmayor.edu.co/foro/profile/uhbetfvgfrfrzldi/
https://alumni.cusat.ac.in/members/vixzrrujfjreunnr/profile/
https://kerbalx.com/anxuyvsexddcqsuk
https://learn.mystudyseries.co.nz/members/wqsvxykkhbufbykc/
https://www.halaltrip.com/user/profile/182030/yszwqjzqneeeotj/
https://yamap.com/users/4257190
https://profile.hatena.ne.jp/ttnbbrcycmgdxhvb/
https://hackmd.io/@dfyhtmilvvmpyncn/HkM-wLPQ1l
https://scrapbox.io/gtdqomdyiwqbnfzr/rqgrrtfauzwxjjwe
https://www.deviantart.com/gjnkskwofarysijq/journal/tphyroymjzvxuklf-1127677322
https://imovieslink.hashnode.dev/kndjdfmcbpuryawa
https://blog.libero.it/wp/zqbljjcdezfikqof/
https://blog.libero.it/wp/zqbljjcdezfikqof/2024/11/29/towmdblxnimldsbv/
https://palmserver.cz/modules.php?name=News&.file=comments&.sid=15910&.tid=210009
https://palmserver.cz/modules.php?name=News&.file=comments&.op=Reply&.pid=210009&.sid=15910
https://dictanote.co/n/1118467/
https://dojour.us/e/38666-sebkvvlffdzdwxvs
https://www.divephotoguide.com/user/edqlhmaxzmpfmksn/
https://bulkwp.com/support-forums/users/objoowijxmjgldry/
https://www.astrobin.com/users/iiuvofsbbizgkyww/
https://www.retecool.com/author/nlfigrxoqdqtpxhl/
https://www.yamareco.com/modules/yamareco/userinfo-917376-prof.html
https://bato.to/u/2325789-ebrnfvbgthgwaltx
https://info.undp.org/docs/dao/UNSP2015/Lists/PostSurvey/Item/displayifs.aspx?ID=137148
http://sharkia.gov.eg/services/window/Lists/List/DispForm.aspx?ID=117048
http://www.alexandria.gov.eg/Lists/List30/DispForm.aspx?ID=85053
http://monofeya.gov.eg/citizens/cases/Lists/List38/DispForm.aspx?ID=93008
https://jsbin.com/nusufituyu/edit?html,output
https://jsfiddle.net/aqL0utx3/
https://kuku.lu/t101ab
https://wokwi.com/projects/415903551196807169
https://rextester.com/XQNAS88574
https://wow.curseforge.com/paste/8feecade
https://telegra.ph/lznspkipeimxpijl-11-29
https://graph.org/aqhfrxeclgqbzstw-11-29
https://te.legra.ph/sfwwoiimgsbfshlq-11-29
https://snippet.host/ryoifq
https://pastelink.net/td5ikek0
https://freepaste.link/public/e6jj53rgak
https://www.etextpad.com/6jvlkdz6k8
https://yamcode.com/pobptzanayfnvtzc
https://pastebin.com/pht5e8r8
https://paste.tc/vviqwplcwiuvgefn
https://paste.feed-the-beast.com/view/c0f64ecc
https://paiza.io/projects/83kti3BaLBx0wAm_UckJfQ
https://mlx.su/paste/view/25741c8d
https://paste.rs/xxer2.html

?
cgise cgise:

https://lookerstudio.google.com/embed/s/mWbwfK44n_Y
https://lookerstudio.google.com/embed/s/r9Hy4_SD9iI
https://lookerstudio.google.com/embed/s/gMM12q8Lxkw
https://lookerstudio.google.com/embed/s/tFUhpCRfmJo
https://lookerstudio.google.com/embed/s/v2czYMoAdsM
https://lookerstudio.google.com/embed/s/s8cK-B9KAfg
https://lookerstudio.google.com/embed/s/psZ67KoG_O4
https://lookerstudio.google.com/embed/s/hp5-ADWlEFU
https://lookerstudio.google.com/embed/s/mcMN0t8tops
https://lookerstudio.google.com/embed/s/lWHbj0v6pio
https://lookerstudio.google.com/embed/s/gpPAGM6ZI9c
https://lookerstudio.google.com/embed/s/uW2RA-bU-_s
https://lookerstudio.google.com/embed/s/keFMuxebLBs
https://lookerstudio.google.com/embed/s/v8ArGZlA4eQ
https://lookerstudio.google.com/embed/s/tvBV6_zZ6PY
https://lookerstudio.google.com/embed/s/icUVjqxZR20
https://lookerstudio.google.com/embed/s/nsrjPkJ7MhA
https://lookerstudio.google.com/embed/s/o32bGVEqep4
https://lookerstudio.google.com/embed/s/hrfJh3YcVYQ
https://lookerstudio.google.com/embed/s/igl2kfySxjU
https://lookerstudio.google.com/embed/s/kEWFNHyyYro
https://lookerstudio.google.com/embed/s/ghzZVHRnkNI
https://lookerstudio.google.com/embed/s/t3yJ-xL4DVY
https://lookerstudio.google.com/embed/s/vRDCvRkMPRs
https://lookerstudio.google.com/embed/s/hh9w7W__Ekw
https://lookerstudio.google.com/embed/s/iqMxds_FSkk
https://lookerstudio.google.com/embed/s/uzfc85k_sHw
https://lookerstudio.google.com/embed/s/mbFn2vyDJ78
https://lookerstudio.google.com/embed/s/h2Hjzcvvrrc
https://lookerstudio.google.com/embed/s/kNc0f2ZfN2k
https://lookerstudio.google.com/embed/s/mcMUe9AbAz8
https://lookerstudio.google.com/embed/s/vWKCtbH53Yc
https://lookerstudio.google.com/embed/s/h1aCrH9nAGU
https://lookerstudio.google.com/embed/s/uI8cRgGrFTY
https://lookerstudio.google.com/embed/s/mv_PGrNFW0A
https://lookerstudio.google.com/embed/s/g8qSFKvx0Rk
https://lookerstudio.google.com/embed/s/jviuFcDT2Ys
https://lookerstudio.google.com/embed/s/hS9XJ5bAf2c
https://lookerstudio.google.com/embed/s/hKqJGdZxtr8
https://lookerstudio.google.com/embed/s/to-K-8uT0Mw
https://lookerstudio.google.com/embed/s/gaJLCZC_Qiw
https://lookerstudio.google.com/embed/s/jwl8BECFNCU
https://lookerstudio.google.com/embed/s/iBPy0xQ7StU
https://lookerstudio.google.com/embed/s/opBbQIkFRxk
https://lookerstudio.google.com/embed/s/vIgfptZzQiI
https://lookerstudio.google.com/embed/s/m_ZrXSIGWTU
https://lookerstudio.google.com/embed/s/oIT-p1cU0VA
https://lookerstudio.google.com/embed/s/t7mRK483ZAo
https://lookerstudio.google.com/embed/s/jAK9uQsMKr4
https://lookerstudio.google.com/embed/s/idmsrBJHKqE
https://lookerstudio.google.com/embed/s/lBag3Ul9HKo
https://lookerstudio.google.com/embed/s/qkeQC8GCTII
https://lookerstudio.google.com/embed/s/i4T9vqpYv5I
https://lookerstudio.google.com/embed/s/pBcXIWEtC8A
https://lookerstudio.google.com/embed/s/nCLbpEdtayg
https://lookerstudio.google.com/embed/s/nAlOATxfyG4
https://lookerstudio.google.com/embed/s/tmWxVHf-XH0
https://lookerstudio.google.com/embed/s/l4p-6GquvV8
https://lookerstudio.google.com/embed/s/osw3loKSRJ8
https://lookerstudio.google.com/embed/s/qUyDybzvNIg
https://lookerstudio.google.com/embed/s/i4xv8q8c5bs
https://lookerstudio.google.com/embed/s/n4_HvSv4fds
https://lookerstudio.google.com/embed/s/stWzb6vF7IY
https://lookerstudio.google.com/embed/s/sfqJZjfUEfw
https://lookerstudio.google.com/embed/s/uXTXrr5bLRQ
https://lookerstudio.google.com/embed/s/glD5_pWCsWc
https://lookerstudio.google.com/embed/s/gp00U0AQ2BU
https://lookerstudio.google.com/embed/s/svzi1vDq5eU
https://lookerstudio.google.com/embed/s/kQrlFfylgCY
https://lookerstudio.google.com/embed/s/q-Gifm-INTY
https://lookerstudio.google.com/embed/s/k8ykFxR608M
https://lookerstudio.google.com/embed/s/jhlutS7R2UA
https://lookerstudio.google.com/embed/s/m2_xyMVFU4A
https://lookerstudio.google.com/embed/s/v9aiZYvEZiE
https://lookerstudio.google.com/embed/s/iOrctQNSkHg
https://lookerstudio.google.com/embed/s/qOn3wMrg7Sc
https://lookerstudio.google.com/embed/s/lB0s9R9wAp4
https://lookerstudio.google.com/embed/s/kTIPrycsQFE
https://lookerstudio.google.com/embed/s/qE_Fv17g-2w
https://lookerstudio.google.com/embed/s/i0lE4WaMSMA
https://lookerstudio.google.com/embed/s/sy48jM3cK08
https://lookerstudio.google.com/embed/s/s0cZKVyFHDg
https://lookerstudio.google.com/embed/s/gaLDMssP8g0
https://lookerstudio.google.com/embed/s/m4qFB2sKCbY
https://lookerstudio.google.com/embed/s/tT38PZDX5Kc
https://lookerstudio.google.com/embed/s/tvqxeJpC5AI
https://lookerstudio.google.com/embed/s/occ7EAOXRSk
https://lookerstudio.google.com/embed/s/ld6X16LO5iM
https://lookerstudio.google.com/embed/s/nxJA3zPHWL4
https://lookerstudio.google.com/embed/s/uf93lrF2qRs
https://lookerstudio.google.com/embed/s/p9cnvFeQius
https://lookerstudio.google.com/embed/s/gkCwHBugMGw
https://lookerstudio.google.com/embed/s/jOMsdLUFVOM
https://lookerstudio.google.com/embed/s/lkZk1aZc8vg
https://lookerstudio.google.com/embed/s/g_k-2YUJSSs
https://lookerstudio.google.com/embed/s/pHj9XWIOjEM
https://lookerstudio.google.com/embed/s/icqX58UuuwI
https://lookerstudio.google.com/embed/s/ppIMDoBQb2k
https://lookerstudio.google.com/embed/s/sKkwp1OhU9w
https://lookerstudio.google.com/embed/s/g2_tpdIIGug
https://lookerstudio.google.com/embed/s/gdFjSr9tVE0
https://lookerstudio.google.com/embed/s/ukQrwvNnFx8
https://lookerstudio.google.com/embed/s/uSfSRrhYE_U
https://lookerstudio.google.com/embed/s/jq914sH82II
https://lookerstudio.google.com/embed/s/hFs1CVFazmc
https://lookerstudio.google.com/embed/s/gJfZPF2mbe0
https://lookerstudio.google.com/embed/s/lvyKu9wKZLY
https://lookerstudio.google.com/embed/s/vbHjBSZKgfA
https://lookerstudio.google.com/embed/s/hNuxTHxOUq8
https://lookerstudio.google.com/embed/s/lTiHHjh-O-Y
https://lookerstudio.google.com/embed/s/isuqlMeeV_Q
https://lookerstudio.google.com/embed/s/ilr0FDFMTCE
https://lookerstudio.google.com/embed/s/my-63Do-SpQ
https://lookerstudio.google.com/embed/s/g9gsr3pSztM
https://lookerstudio.google.com/embed/s/sol60hgr47c
https://lookerstudio.google.com/embed/s/qXItcbm3dio
https://lookerstudio.google.com/embed/s/t0hAmEi5BNM
https://lookerstudio.google.com/embed/s/qZxigzVTLPo
https://lookerstudio.google.com/embed/s/tWCCfAIjwOI

https://palmserver.cz/modules.php?name=News&.file=comments&.sid=15910&.tid=209924
https://palmserver.cz/modules.php?name=News&.file=comments&.op=Reply&.pid=209924&.sid=15910
https://www.serialzone.cz/uzivatele/231581-lvvwfncqmbluqdws/
https://www.fdb.cz/clen/210595-njaokubbaccpbofl.html
https://starity.hu/profil/514160-jtcaacmejaqzcsnv/
https://files.fm/kyfopiqfsoktgjbd/info
https://www.metaculus.com/accounts/profile/230375/
https://tinhte.vn/profile/vximamvkgwzdzqyh.3170442/
https://www.outdoorproject.com/users/ybrhtydelbrtbvng-vioaoaqrjingimjg
https://bootstrapbay.com/user/mopeyjwccsbyanna
https://muckrack.com/kzozgdaqfqtlgvqw-mpxlhwpqnubvrmec/bio
https://www.bricklink.com/aboutMe.asp?u=jnwmfispzstsyzs
https://findaspring.org/members/eoexvevmqmspoumn/
https://schoolido.lu/user/bgobpzfwcmojejer/
https://www.tai-ji.net/members/profile/3324367/ucwupxpdcrrmrlkr.htm
https://www.thepetservicesweb.com/members/profile/3324368/oagaymmarajmbnwl.htm
https://www.greencarpetcleaningprescott.com/members/profile/3324369/nxejnnapgymzbqxj.htm
https://app.roll20.net/users/15263510/ekwvdwnzmavqwfrz
https://pbase.com/arbtmduhtpjldvsi
https://www.furaffinity.net/user/cyfyapkfggyvuurm
https://3rd-strike.com/author/fzjcyavimyszwlkp/
https://www.elephantjournal.com/profile/kwjvrzjeubfwxrdm/
https://www.facer.io/user/mmtkF7Dv2c
https://www.bitsdujour.com/profiles/ug5u3r
https://www.faneo.es/users/yuhuoqjtbbkfnium/
https://ilm.iou.edu.gm/members/jhwyxlhgfvdswnmo/
https://espritgames.com/members/45204433/
https://www.rwaq.org/users/s1kwm39he6-20241125204740
https://kurs.com.ua/profile/72156-vskcsphlbkqnkuht/?tab=field_core_pfield_11
https://www.egresadosudistrital.edu.co/virtualcourses/forums/users/otxylhumsmzxlowj
https://kbs.knutsford.edu.gh/profile/fcoxhsxojbgvsytp/
https://ucgp.jujuy.edu.ar/profile/lyyqhelvfdkodpcp/
https://lms.aimms.edu.pk/profile/cnihpmolmboenouu/
https://www.colmayor.edu.co/foro/profile/znvatagimuidzsdq/
https://alumni.cusat.ac.in/members/hltkcgkkadqwzksn/profile/
https://kerbalx.com/gxwktowwstfjmejg
https://www.halaltrip.com/user/profile/181368/bhgnxjkammutwbm/
https://learn.mystudyseries.co.nz/members/hhdnxgaajtfowtmn/
https://www.kaggle.com/shtuvjfjsshphpud
https://yamap.com/users/4252602
https://profile.hatena.ne.jp/onbtfqgngreznhgk/
https://hackmd.io/@dfyhtmilvvmpyncn
https://hackmd.io/@dfyhtmilvvmpyncn/ByYFPHQX1l
https://scrapbox.io/gtdqomdyiwqbnfzr/fqubxydgylmdepsu
https://www.deviantart.com/gjnkskwofarysijq/journal/sfkyqvkmkjetcqrz-1126547006
https://imovieslink.hashnode.dev/mnrwcyukgjbyhouo
https://dictanote.co/n/1115845/
https://dojour.us/e/38444-kzevqadqgkedclsb
https://www.divephotoguide.com/user/oywerzqcokdsyqpu
https://bulkwp.com/support-forums/users/agqsorywyrodjejq/
https://info.undp.org/docs/dao/UNSP2015/Lists/PostSurvey/Item/displayifs.aspx?ID=135361
http://sharkia.gov.eg/services/window/Lists/List/DispForm.aspx?ID=115558
http://monofeya.gov.eg/citizens/cases/Lists/List38/DispForm.aspx?ID=92650
http://www.alexandria.gov.eg/Lists/List30/DispForm.aspx?ID=84689
https://jsbin.com/gukowigapo/edit?html,output
https://kuku.lu/t1019b
https://wokwi.com/projects/415622445069991937
https://rextester.com/BRM89523
https://jsfiddle.net/z98hs7kn/
https://wow.curseforge.com/paste/057397c9
https://telegra.ph/ivhpvhgdccetpedb-11-26
https://graph.org/nzsoozhfkhbskfdz-11-26
https://te.legra.ph/laslsxqhwlgtupjp-11-26
https://snippet.host/ftowcb
https://pastelink.net/7bvzeqnb
https://freepaste.link/public/nkrrghlavq
https://freepaste.link/public/o45aggynba
https://yamcode.com/cftufzriqhkwnbwg
https://mlx.su/paste/view/8098c486
https://pastebin.com/JkUVvCkt
https://paste.tc/ngsmcxovzvpllpwm
https://paste.feed-the-beast.com/view/f963118d
https://paiza.io/projects/W2XM1EK4XJI-Jyr2GX227g
https://paste.rs/ynCTH.html

?
cgise cgise:

https://bento.me/the-deception-game-ep-7
https://bento.me/the-deception-game-ep-8
https://bento.me/the-deception-game-ep-9
https://bento.me/the-deception-game-ep-10
https://bento.me/the-deception-game-ep-11
https://bento.me/the-deception-game-ep-12
https://bento.me/the-deception-game-ep-13
https://bento.me/the-deception-game-ep-14
https://bento.me/the-deception-game-ep-15
https://bento.me/the-deception-game-ep-16
https://bento.me/the-deception-game-ep-17
https://bento.me/the-deception-game-ep-18
https://bento.me/the-deception-game-ep-19
https://bento.me/the-deception-game-ep-20
https://bento.me/the-scent-of-hers-ep-5
https://bento.me/the-scent-of-hers-ep-6
https://bento.me/the-scent-of-hers-ep-7
https://bento.me/the-scent-of-hers-ep-8
https://bento.me/the-scent-of-hers-ep-9
https://bento.me/the-scent-of-hers-ep-10
https://bento.me/the-scent-of-hers-ep-11
https://bento.me/the-scent-of-hers-ep-12
https://bento.me/the-scent-of-hers-ep-13
https://bento.me/the-scent-of-hers-ep-14
https://bento.me/the-scent-of-hers-ep-15
https://bento.me/the-scent-of-hers-ep-16
https://bento.me/mom-ped-sawan-ep-2
https://bento.me/mom-ped-sawan-ep-3
https://bento.me/mom-ped-sawan-ep-4
https://bento.me/mom-ped-sawan-ep-5
https://bento.me/mom-ped-sawan-ep-6
https://bento.me/mom-ped-sawan-ep-7
https://bento.me/mom-ped-sawan-ep-8
https://bento.me/mom-ped-sawan-ep-9
https://bento.me/mom-ped-sawan-ep-10
https://bento.me/mom-ped-sawan-ep-11
https://bento.me/mom-ped-sawan-ep-12
https://bento.me/mom-ped-sawan-ep-13
https://bento.me/mom-ped-sawan-ep-14
https://bento.me/mom-ped-sawan-ep-15
https://bento.me/mom-ped-sawan-ep-16
https://bento.me/mom-ped-sawan-ep-17
https://bento.me/mom-ped-sawan-ep-18
https://bento.me/mom-ped-sawan-ep-19
https://bento.me/game-of-love-ep-13
https://bento.me/game-of-love-ep-14
https://bento.me/game-of-love-ep-15
https://bento.me/game-of-love-ep-16
https://bento.me/game-of-love-ep-17
https://bento.me/game-of-love-ep-18
https://bento.me/game-of-love-ep-19
https://bento.me/game-of-love-ep-20
https://bento.me/game-of-love-ep-21
https://bento.me/game-of-love-ep-22
https://bento.me/game-of-love-ep-23
https://bento.me/game-of-love-ep-24
https://bento.me/game-of-love-ep-25
https://bento.me/perfect-10-liners-ep-5
https://bento.me/perfect-10-liners-ep-6
https://bento.me/perfect-10-liners-ep-7
https://bento.me/perfect-10-liners-ep-8
https://bento.me/perfect-10-liners-ep-9
https://bento.me/perfect-10-liners-ep-10
https://bento.me/perfect-10-liners-ep-11
https://bento.me/perfect-10-liners-ep-12
https://bento.me/perfect-10-liners-ep-13
https://bento.me/perfect-10-liners-ep-14
https://bento.me/fourever-you-ep-13
https://bento.me/fourever-you-ep-14
https://bento.me/fourever-you-ep-15
https://bento.me/fourever-you-ep-16
https://bento.me/love-sick-ep-11
https://bento.me/love-sick-ep-12
https://bento.me/love-sick-ep-13
https://bento.me/love-sick-ep-14
https://bento.me/love-sick-ep-15
https://bento.me/time-ep-5
https://bento.me/time-ep-6
https://bento.me/time-ep-7
https://bento.me/time-ep-8
https://bento.me/time-ep-9
https://bento.me/time-ep-10
https://bento.me/time-ep-11
https://bento.me/time-ep-12
https://bento.me/time-ep-13
https://bento.me/the-legend-of-nang-nak-ep-27
https://bento.me/the-legend-of-nang-nak-ep-28
https://bento.me/the-legend-of-nang-nak-ep-29
https://bento.me/the-legend-of-nang-nak-ep-30
https://bento.me/good-doctor-ep-13
https://bento.me/good-doctor-ep-14
https://bento.me/good-doctor-ep-15
https://bento.me/good-doctor-ep-16
https://bento.me/good-doctor-ep-17
https://bento.me/good-doctor-ep-18
https://bento.me/good-doctor-ep-19
https://bento.me/good-doctor-ep-20
https://bento.me/high-school-frenemy-ep-12
https://bento.me/high-school-frenemy-ep-13
https://bento.me/high-school-frenemy-ep-14
https://bento.me/high-school-frenemy-ep-15
https://bento.me/high-school-frenemy-ep-16
https://bento.me/the-heart-killers-ep-1
https://bento.me/the-heart-killers-ep-2
https://bento.me/the-heart-killers-ep-3
https://bento.me/the-heart-killers-ep-4
https://bento.me/the-heart-killers-ep-5
https://bento.me/the-heart-killers-ep-6
https://bento.me/the-heart-killers-ep-7
https://bento.me/the-heart-killers-ep-8
https://bento.me/the-heart-killers-ep-9
https://bento.me/the-heart-killers-ep-10
https://bento.me/the-musical-murder-ep-5
https://bento.me/the-musical-murder-ep-6
https://bento.me/the-musical-murder-ep-7
https://bento.me/the-musical-murder-ep-8
https://bento.me/the-musical-murder-ep-9
https://bento.me/the-musical-murder-ep-10
https://bento.me/the-musical-murder-ep-11
https://bento.me/the-musical-murder-ep-12
https://bento.me/the-musical-murder-ep-13
https://bento.me/the-musical-murder-ep-14
https://bento.me/petrichor-the-series-ep-1
https://bento.me/petrichor-the-series-ep-2
https://bento.me/petrichor-the-series-ep-3
https://bento.me/petrichor-the-series-ep-4
https://bento.me/petrichor-the-series-ep-5
https://bento.me/petrichor-the-series-ep-6
https://bento.me/petrichor-the-series-ep-7
https://bento.me/petrichor-the-series-ep-8
https://bento.me/love-and-scandal-ep-1
https://bento.me/love-and-scandal-ep-2
https://bento.me/love-and-scandal-ep-3
https://bento.me/love-and-scandal-ep-4
https://bento.me/love-and-scandal-ep-5
https://bento.me/love-and-scandal-ep-6
https://bento.me/love-and-scandal-ep-7
https://bento.me/love-and-scandal-ep-8
https://bento.me/love-and-scandal-ep-9
https://bento.me/love-and-scandal-ep-10
https://bento.me/the-fiery-priest-2-ep-5
https://bento.me/the-fiery-priest-2-ep-6
https://bento.me/the-fiery-priest-2-ep-7
https://bento.me/the-fiery-priest-2-ep-8
https://bento.me/the-fiery-priest-2-ep-9
https://bento.me/the-fiery-priest-2-ep-10
https://bento.me/the-fiery-priest-2-ep-11
https://bento.me/the-fiery-priest-2-ep-12
https://bento.me/winter-is-not-the-death-of-summer-but-the-birth-of-spring-ep-1
https://bento.me/winter-is-not-the-death-of-summer-but-the-birth-of-spring-ep-2
https://bento.me/winter-is-not-the-death-of-summer-but-the-birth-of-spring-ep-3
https://bento.me/winter-is-not-the-death-of-summer-but-the-birth-of-spring-ep-4
https://bento.me/winter-is-not-the-death-of-summer-but-the-birth-of-spring-ep-5

https://www.serialzone.cz/uzivatele/230951-cqdanghqigadlksv/
https://www.fdb.cz/clen/210207-jcjkgfrmctweekii.html
https://starity.hu/profil/511512-gmxvohilpccaceqk/
https://files.fm/pfnqnqatqtrrnsxf/info
https://www.metaculus.com/accounts/profile/228977/
https://tinhte.vn/profile/hxiqqjckpvkmlyka.3150588/
https://www.outdoorproject.com/users/tshcztkblwcjorxf-itqbfwlbloiqxgnt
https://muckrack.com/rwxnydahccuivvoi-ppasmlmrrrbzyzbu/bio
https://bootstrapbay.com/user/hnhufjxorarwflok
https://www.bricklink.com/aboutMe.asp?u=pbmckjgetvlaqxa
https://findaspring.org/members/yksncyykijnytufl/
https://schoolido.lu/user/bwsvgyxpogibegzm/
https://www.tai-ji.net/members/profile/3322543/ntipmxslsjzdngvo.htm
https://www.thepetservicesweb.com/members/profile/3322545/axibldknjkhkyqhp.htm
https://www.greencarpetcleaningprescott.com/members/profile/3322546/wnaitiexvlbiooba.htm
https://app.roll20.net/users/15231996/nakythpordyzhque
https://pbase.com/vgivnwucnepkchjk
https://www.furaffinity.net/user/uxlruhdbshitwmoi
https://www.beatstars.com/modaxkeidslmruea/about
https://3rd-strike.com/author/sjppwvyzaxvvlmgz/
https://www.elephantjournal.com/profile/pjuttstrahhyfjkt/
https://www.facer.io/user/3nCj0qi7cD
https://www.bitsdujour.com/profiles/aUDjeQ
https://espritgames.com/members/45146031/
https://www.rwaq.org/users/6mp0om3nqs-20241120201729
https://www.faneo.es/users/liibvokvcjhnfqjw/
https://ilm.iou.edu.gm/members/lwrgqsfeeduikthi/
https://kurs.com.ua/profile/71796-ehtqueiimmsesqss/?tab=field_core_pfield_11
https://www.egresadosudistrital.edu.co/virtualcourses/forums/users/qybygefopveiflpb
https://kbs.knutsford.edu.gh/profile/tsfwynddenimfvhk/
https://ucgp.jujuy.edu.ar/profile/iqxclboewejhymtt/
https://lms.aimms.edu.pk/profile/cozvmnepovsocqnf/
https://www.colmayor.edu.co/foro/profile/jkukqfizyldfsele/
https://alumni.cusat.ac.in/members/vbatpnrmbsllbxwu/profile/
https://learn.mystudyseries.co.nz/members/xmysjshqigruwwwp/
https://kerbalx.com/afxjuitkchkaajpg
https://www.halaltrip.com/user/profile/180213/fcswsfvhipzhkoh/
https://yamap.com/users/4240574
https://profile.hatena.ne.jp/olofglfuxzoziery/
https://hackmd.io/@ahkyxowgqnihqodh
https://hackmd.io/@ahkyxowgqnihqodh/HkRtesofyl
https://scrapbox.io/gtdqomdyiwqbnfzr/
https://scrapbox.io/gtdqomdyiwqbnfzr/fkbyfeyldbwiuand
https://www.deviantart.com/gjnkskwofarysijq
https://www.deviantart.com/gjnkskwofarysijq/posts
https://www.deviantart.com/gjnkskwofarysijq/journal/wljbbclujppcehho-1124412629
https://imovieslink.hashnode.dev/
https://imovieslink.hashnode.dev/cyvdsoranwqgdbqh
https://hashnode.com/@imovieslink
https://goseriestv.theblog.me/
https://goseriestv.theblog.me/posts/55872857
https://palmserver.cz/modules.php?name=News&.file=comments&.sid=2442&.tid=209763
https://palmserver.cz/modules.php?name=News&.file=comments&.op=Reply&.pid=209763&.sid=2442
https://dojour.us/e/38109-sytjoeyisgfxqzrt
https://info.undp.org/docs/dao/UNSP2015/Lists/PostSurvey/Item/displayifs.aspx?ID=130723
https://info.undp.org/docs/dao/UNSP2015/Lists/PostSurvey/Item/displayifs.aspx?List=6e146e50-e299-46da-a20e-b9e885dace29&.ID=130723
http://sharkia.gov.eg/services/window/Lists/List/DispForm.aspx?ID=111448
http://monofeya.gov.eg/citizens/cases/Lists/List38/DispForm.aspx?ID=92073
https://dictanote.co/n/1111569/
https://jsbin.com/cuzonatusa/edit?html,output
https://jsfiddle.net/n7arfd5q/
https://kuku.lu/t10184
https://wokwi.com/projects/415099513050936321
https://rextester.com/SAYL30095
https://wow.curseforge.com/paste/bf7e7d51
https://telegra.ph/rduemwscwjbzyuqm-11-20
https://graph.org/sirijulnmxnstssd-11-20
https://te.legra.ph/bgrksrjkwkdithgr-11-20
https://snippet.host/kaxree
https://pastelink.net/msgdz656
https://freepaste.link/public/flmiedmmuc
https://www.etextpad.com/ecmedqhkey
https://yamcode.com/rqxafrlezbxuzjbi
https://mlx.su/paste/view/df954a4d
https://pastebin.com/g24GWZwr
https://paste.tc/azfktmaiuqeyrzhb
https://paste.feed-the-beast.com/view/d51cf8c2
https://paiza.io/projects/bamwETJq4YuvCuQIqljjsg
https://paste.rs/FsZlo.html