Nexly AI Privacy Policy
Effective Date: June 3, 2025
Contact: 15442 Ventura Blvd., Ste 201-1552, Sherman Oaks, CA 91403; info@nexly.eu
Overview: Nexly AI is committed to protecting your privacy. This policy describes how we collect, use, and protect your personal data on our website (https://nexly.eu). "User" refers to anyone using our site.
1. Data Controller: Nexly AI is the Data Controller and complies with applicable laws (including the GDPR) and privacy standards. We may use Processors/Subcontractors, selected for their high standards of data protection.
2. Data Collection:
Directly: When you provide it (e.g., orders, account registration, email subscriptions, surveys, contact us).
Passively: Through cookies and web beacons (e.g., site navigation, device info).
From Other Sources: Affiliates, business partners, and social media platforms.
3. Data Use Purposes:
Customer service.
Marketing & promotions.
Third-party social network interaction.
Personalization (with consent).
Order fulfillment.
Internal research, analytics, security, and account management.
4. Data Types:
Contact, Account Login, Device, Payment, Demographic, Third-Party Social Network, Site Usage, Feedback, Geolocation (with consent), and Inferences.
5. Consent: By using the site, you agree to this policy. Consent is required for certain actions.
6. Data Minimization: We only collect necessary data for specific purposes.
7. Data Sovereignty: Option to store data within your chosen region.
8. Retention: Data is generally retained for a maximum of 5 years, with exceptions based on data type, legal obligations, or dispute resolution.
9. Data Sharing: Data may be shared with employees, collaborators, subcontractors, processors, or suppliers who are committed to upholding the confidentiality and security of your data. We share data in order to market our products and provide our services, including with:
Service Providers: Website hosting (Google), payment processing (Stripe), data analytics (Google Analytics), customer support (Freshdesk).
Cross-Contextual Behavioral Advertising: For targeted ads (opt-out available).
Credit reporting/Debt Collectors (under certain legal circumstances).
Legal Reasons and Mergers/Acquisitions.
10. Automated Processing: We use algorithms for personalization (product recommendations, content customization, targeted advertising). You can manage your AI preferences.
11. Data Protection Officer (DPO): Dempsey De Clerck is the DPO, ensuring compliance.
12. User Rights: You have the rights of Access, Rectification, Erasure, Restriction, Portability, Objection, and to Lodge a Complaint. Contact us at info@nexly.eu to exercise these rights.
13. Data Portability and Deletion:
Download your data: info@nexly.eu
Account deletion: info@nexly.eu
14. Cross-Border Data Transfers: We use appropriate safeguards, such as Standard Contractual Clauses and Binding Corporate Rules, to protect your data when it is transferred outside the EEA.
15. Automated Decision-Making and Profiling: We use automated decision-making. Contact info@nexly.eu to contest automated decisions.
16. Data Breach Notification: In the event of a data breach affecting your personal data, we will notify you and the relevant supervisory authorities without undue delay.
17. Transparency in Algorithmic Processing: We explain how algorithms are used and give you control over algorithm-driven content.
18. Privacy Enhancing Technologies (PETs): We use Differential Privacy, Homomorphic Encryption, and Zero-Knowledge Proofs.
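As a minimal illustration of one of these technologies, the sketch below applies the Laplace mechanism, a standard differential-privacy technique, to a simple count query. The parameters and values are hypothetical and do not describe Nexly AI's production systems.

```python
import numpy as np

def laplace_count(true_count: int, epsilon: float) -> float:
    """Return a differentially private count using the Laplace mechanism.

    A counting query has sensitivity 1 (adding or removing one person changes
    the count by at most 1), so noise drawn from Laplace(1/epsilon) yields
    epsilon-differential privacy for the released value.
    """
    sensitivity = 1.0
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

# Example: release how many users clicked a recommendation, with epsilon = 0.5.
print(round(laplace_count(true_count=1280, epsilon=0.5)))
```

Smaller epsilon values add more noise and give stronger privacy at the cost of accuracy.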
19. Data Ethics and AI: We are committed to Fairness, Transparency, Privacy by Design, Accountability, User Control, and Ethical Advertising.
20. Ethical AI Principles: Adherence to the EU AI Act and global frameworks (the OECD AI Principles, the Montreal Declaration).
21. Human Oversight: Nexly AI ensures human oversight of AI systems through a dedicated team of experts. This team:
Reviews and Approves AI Systems: The human oversight team reviews and approves any new AI system before deployment to ensure it adheres to ethical AI principles, privacy regulations, and technical safety standards.
Monitors AI Performance: The team actively monitors the performance of deployed AI systems, identifying potential risks and issues related to bias, fairness, privacy, and unintended consequences.
Addresses Concerns and Issues: They promptly address any identified concerns or issues, taking appropriate action to rectify problems or implement corrective measures.
Collaborates with Developers: The team collaborates with AI developers throughout the development lifecycle to ensure ethical considerations are integrated into design and implementation.
21.1 Human-in-the-Loop: To ensure responsible and ethical use of AI, we implement "human-in-the-loop" systems for critical decisions. This means that human experts are involved in reviewing and potentially overriding automated decisions made by AI algorithms. This is particularly relevant in situations where AI might have a significant impact on individual rights or interests, such as:
Personalized Recommendations: Human experts may review the recommendations made by our AI algorithms to ensure they are relevant, fair, and not based on discriminatory biases.
Content Customization: Human experts may review the content recommendations generated by our algorithms to ensure they align with our editorial guidelines and do not promote harmful or misleading information.
Automated Moderation: Human experts may review decisions made by our AI moderation systems to ensure they are accurate and fair, preventing the inappropriate removal or censorship of content.
22. Risk Assessment and Mitigation: Nexly AI shall conduct risk assessments for AI systems, particularly those deemed high-risk under the EU AI Act, and implement appropriate measures to mitigate identified risks to individuals, society, and fundamental rights. These assessments will consider potential risks such as:
Bias and Discrimination: We assess the potential for the AI system to unfairly discriminate against individuals or groups based on protected characteristics.
Privacy Breaches: We evaluate the AI system's potential to compromise user privacy and implement appropriate safeguards to protect personal data.
Unintended Consequences: We consider the potential for the AI system to have unintended negative consequences for individuals or society.
23. Data Quality and Bias Mitigation: Nexly AI shall ensure the quality and integrity of data used to train AI systems, actively mitigate biases in data and algorithms, and regularly monitor and audit AI systems for fairness and non-discrimination. We achieve this through:
Data Source Evaluation: We carefully select and evaluate data sources to minimize the risk of bias and ensure data quality. This involves assessing the source of data, its representativeness, and its potential for bias.
Bias Detection and Mitigation Techniques: We employ fairness-oriented techniques, such as fair representation algorithms, to identify and mitigate potential biases in our algorithms, alongside privacy-preserving techniques such as differential privacy; see the illustrative sketch after this list. These techniques help ensure that our algorithms are fair and equitable across different groups of users.
Regular Monitoring and Auditing: We conduct regular monitoring and auditing of our AI systems to detect and address potential biases, ensuring ongoing fairness and ethical use. This involves continuous assessment of our AI systems to ensure that they are functioning fairly and ethically.
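As a minimal illustration of the fairness audits referenced in the list above, the sketch below computes a demographic-parity gap over a batch of model outputs. The function name, data, and group labels are hypothetical examples, not Nexly AI's actual tooling.

```python
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rates between groups.

    predictions: iterable of 0/1 model outputs (e.g., "show this offer").
    groups: iterable of group labels aligned with predictions.
    A large gap signals that the model may treat groups unequally and
    should trigger further review.
    """
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# Example audit over a small batch of predictions.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_gap(preds, groups))  # 0.75 - 0.25 = 0.5
```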
24. Explainability and Transparency: Nexly AI shall provide clear and understandable explanations of AI-driven decisions to users, including the factors considered and the rationale behind recommendations or actions taken by AI systems. We aim to provide users with insights into how AI systems work and how their decisions are made.
25. User Consent and Control: Nexly AI shall obtain explicit consent from users for the use of AI systems that significantly impact their rights or interests, and provide users with meaningful control over their data and AI-driven experiences.
26. Data Protection and Privacy: Nexly AI shall ensure compliance with data protection regulations, including the GDPR, by implementing robust privacy measures, such as:
Data Anonymization: We employ anonymization techniques to remove personally identifiable information from data sets, ensuring that the data cannot be linked back to individuals.
Encryption: We use encryption to protect your personal data in transit and at rest, making it unreadable to unauthorized parties (see the illustrative sketch after this list).
User-Centric Privacy Settings: We provide users with granular control over their privacy settings, allowing them to customize their preferences for data collection, processing, and sharing.
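As a minimal illustration of encryption at rest (referenced in the encryption item above), the sketch below uses symmetric Fernet encryption from the widely used cryptography package. The key handling and data values are illustrative assumptions, not a description of Nexly AI's production infrastructure.

```python
from cryptography.fernet import Fernet  # third-party "cryptography" package

# Generate a symmetric key; in practice the key would live in a
# key-management service, never in source code or alongside the data.
key = Fernet.generate_key()
fernet = Fernet(key)

# Encrypt a piece of personal data before writing it to storage ("at rest").
ciphertext = fernet.encrypt(b"jane.doe@example.com")

# Only holders of the key can recover the original value.
assert fernet.decrypt(ciphertext) == b"jane.doe@example.com"
```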
27. Accountability and Auditing: Nexly AI shall maintain records of AI systems, data used for training, and decision-making processes, and be able to demonstrate compliance with the EU AI Act through regular auditing and reporting mechanisms.
28. Prohibited Practices: Nexly AI shall refrain from engaging in prohibited AI practices outlined in the EU AI Act, such as:
Social Scoring: We will not use AI systems to create social scores that rank individuals based on their behavior or characteristics.
Indiscriminate Surveillance: We will not engage in indiscriminate surveillance practices that collect data on individuals without their consent or justification.
Manipulation of Vulnerable Individuals: We will not use AI systems to manipulate or exploit vulnerable individuals, such as children or people with disabilities.
29. Notification of Authorities: Nexly AI shall promptly notify relevant authorities of any significant incidents, malfunctions, or breaches involving AI systems that may affect the rights or safety of individuals or society.
30. Training and Certification: Nexly AI shall ensure that personnel involved in the development, deployment, and monitoring of AI systems receive adequate training on ethical AI principles, legal requirements, and best practices, and obtain appropriate certification where necessary.
31. Continuous Monitoring and Improvement: Nexly AI shall establish processes for continuous monitoring, evaluation, and improvement of AI systems to ensure ongoing compliance with the EU AI Act and evolving best practices in AI governance.
32. Consent Management Dashboard: Allows you to manage your consent preferences.
33. Privacy by Design: Your data privacy is considered throughout development.
34. User Education and Awareness: We provide resources to help you protect your data.
35. Ethical Advertising Practices: Opt-out option for personalized ads.
36. Use of Personal Data for AI Model Training: We may use your data to train and improve our AI models and services. We handle this data with the same level of security described in this Policy, and you retain control over it. Contact info@nexly.eu.
36.1 Why We Use Your Personal Data for AI Model Training:
The primary purpose of using your personal data for AI model training is to improve your experience on our platform. By analyzing user interactions, behaviors, and preferences, we can develop AI-driven features and services that better align with your expectations. This helps us provide you with more relevant content, recommendations, and user interactions, ultimately enhancing your overall satisfaction with our platform.
36.2 Ensuring the Security and Compliance of Your Data:
Rest assured that we handle the data used for AI model training with the same level of security and compliance as outlined in this Privacy Policy. Your data is subject to rigorous security measures and privacy safeguards to protect it from unauthorized access, disclosure, and misuse.
36.3 Our Commitment to Ethical Data Use:
We are deeply committed to the responsible and ethical use of your personal data for AI model training. Our practices are rooted in fairness, transparency, and respect for your privacy rights. We continuously work to identify and mitigate biases in our AI algorithms to ensure that your data is used in a way that respects your individuality and protects against discrimination.
36.4 Your Control Over Data Usage:
We understand that you may have concerns about how your data is used. While we strive to use your data responsibly to enhance your experience, we also respect your preferences. If you wish to exercise greater control over the use of your data for AI model training or have specific preferences, please refer to our Consent Management Dashboard. Here, you can customize your privacy settings and preferences to align with your comfort level.
36.5 Data Anonymization and Pseudonymization:
To further protect your privacy, we employ data anonymization and pseudonymization techniques before using your data for AI model training. Anonymization removes personally identifiable information, while pseudonymization replaces it with unique identifiers. These techniques ensure that your identity remains safeguarded. For instance, we might replace your name with a random identifier or remove your email address from the dataset before using it for AI model training.
36.6 Compliance with Legal Requirements:
We adhere strictly to all relevant data protection laws and regulations in our jurisdiction when using personal data for AI model training. Your data privacy rights are always respected, and we ensure that our practices are compliant with applicable legal requirements.
36.7 Regular Audits and Data Retention:
Our commitment to transparency extends to regular audits of data used for AI model training to ensure compliance with our privacy and security standards. Additionally, we only retain personal data for AI model training purposes for as long as necessary to fulfill the intended objectives. When the data is no longer needed for this purpose, it will be promptly deleted.
37. Third-Party Audits and Certification:
38. Privacy Impact Assessments (PIAs):
We conduct Privacy Impact Assessments (PIAs) to rigorously assess and proactively mitigate risks associated with data processing activities, particularly those involving novel technologies or high-risk data processing. PIAs are conducted to:
Identify Risks: We carefully analyze data processing activities to identify potential risks to individuals' privacy, such as breaches, discrimination, or misuse of personal data.
Evaluate and Mitigate Risks: We evaluate the severity and likelihood of identified risks and develop appropriate mitigation strategies to address them effectively.
Monitor and Review: We regularly monitor and review the effectiveness of our mitigation strategies and make adjustments as needed.
39. Regular Transparency Reports:
40. Accessibility and Multilingual Support:
41. User Consent for Cookies and Tracking Technologies: We obtain your consent before using cookies and tracking technologies, and you can manage your preferences at any time.
42. Children's Privacy: Our services are not intended for children under 16. Contact us immediately if you believe we have collected data from a child.
43. Data Security Measures: We use encryption, access controls, firewalls, and regular assessments.
44. Data Retention Justifications: We retain data only for as long as necessary to fulfill the purposes described in this Policy.
45. Algorithmic Impact Assessment (AIA):
Nexly AI conducts Algorithmic Impact Assessments (AIAs) for any AI system that poses a significant risk to individuals' rights or interests. The AIA process involves a multi-step approach:
Risk Identification: We analyze the proposed AI system to identify potential risks associated with its operation. These risks may include bias, discrimination, privacy violations, and potential negative societal impacts.
Risk Evaluation: We evaluate the severity and likelihood of identified risks, determining the necessary level of mitigation. This evaluation considers factors such as the potential harm to individuals, the scope of impact, and the feasibility of mitigation.
Mitigation Strategies: We develop and implement mitigation strategies to address identified risks. These strategies may include:
Data anonymization: Removing personally identifiable information from the data used to train the AI system to reduce the risk of privacy breaches.
Algorithmic fairness techniques: Employing methods to ensure that the AI system treats individuals fairly and equitably, regardless of their protected characteristics.
Human oversight mechanisms: Implementing human review and intervention processes to ensure that AI decisions are ethical and appropriate.
Transparent reporting: Providing clear and understandable explanations of how AI systems operate and the decisions they make.
Monitoring and Review: We monitor the AI system's performance and impact over time, conducting periodic reviews and adjustments to ensure the continued effectiveness of mitigation strategies. This ongoing monitoring ensures that the AI system remains compliant with ethical and legal requirements.
Criteria for AI System Evaluation:
Our AIAs consider the following factors when evaluating risks and developing mitigation strategies:
Data Quality and Bias: We assess the quality and potential biases within the data used to train the AI system. This includes evaluating the source of the data, its representativeness, and its potential for discrimination.
Algorithmic Transparency: We evaluate the transparency and explainability of the AI system's decision-making process. This involves ensuring that the AI system's decisions are understandable and can be explained to users.
Fairness and Discrimination: We assess the potential for the AI system to discriminate against individuals or groups based on protected characteristics. This involves testing the AI system to ensure that it treats individuals fairly, regardless of their race, gender, ethnicity, religion, or other protected attributes.
Privacy Impact: We analyze the AI system's impact on user privacy and implement appropriate safeguards to protect personal data. This involves assessing the potential for the AI system to collect, process, or disclose personal data without proper consent or justification.
Social and Societal Impact: We consider the broader societal implications of the AI system and its potential to influence human behavior. This includes considering the potential for the AI system to be used for harmful purposes or to reinforce existing social biases.
By conducting rigorous AIAs, we ensure that our AI systems operate ethically and responsibly, minimizing risks to individuals and society while maximizing the benefits of AI innovation.
46. Human-in-the-Loop Systems: Nexly AI shall implement human-in-the-loop systems where appropriate, allowing human intervention in AI-driven processes to review and override automated decisions, especially in cases of significant impact on individuals' rights or interests. This ensures that humans are involved in the decision-making process, particularly for critical decisions or situations where human judgment is necessary.
47. Accessibility in AI Systems: Nexly AI shall ensure that AI systems are designed and developed with accessibility considerations, ensuring equal access and usability for individuals with disabilities. We aim to make our AI systems inclusive and accessible to all users. This involves using design principles that accommodate users with different abilities and ensuring that our AI systems are compatible with assistive technologies.
48. Procurement Requirements for AI Systems: Nexly AI shall establish procurement requirements for AI systems, ensuring that third-party AI solutions comply with ethical AI principles, data protection regulations, and the EU AI Act before integration into Nexly AI's infrastructure. We prioritize ethical and compliant AI solutions when acquiring third-party AI systems. This involves carefully vetting AI vendors and ensuring that their products and services align with our ethical and legal standards.
49. Collaboration with Regulatory Authorities: Nexly AI shall collaborate with regulatory authorities, data protection agencies, and other relevant stakeholders to promote the responsible use of AI and ensure compliance with the EU AI Act and other applicable regulations. This includes engaging in dialogue with regulatory bodies, participating in industry working groups, and proactively seeking guidance on emerging AI issues.
50. Public Transparency Reports on AI Systems: Nexly AI shall publish regular transparency reports on AI systems, providing detailed information about the design, functionality, and impact of AI systems on individuals and society. These reports will promote accountability and transparency in our AI practices.
51. Ethical Review Board: Nexly AI shall establish an ethical review board comprising experts in AI ethics, data protection, and human rights to provide guidance and oversight on ethical AI practices and ensure alignment with the EU AI Act. This board will play a critical role in reviewing and approving AI systems and ensuring adherence to ethical standards.
Functions of the Ethical Review Board:
Review and Approval of AI Systems: The board will conduct thorough reviews of all AI systems developed or deployed by Nexly AI, evaluating their ethical implications, potential risks, and compliance with relevant regulations.
Guidance on Ethical AI Practices: The board will provide guidance and advice to Nexly AI on best practices for ethical AI development, deployment, and use, ensuring alignment with ethical principles and legal requirements.
Monitoring and Oversight: The board will monitor the ongoing ethical performance of AI systems, identify potential risks or issues, and provide recommendations for mitigation or remediation.
Independent Assessment: The board will operate independently from Nexly AI's operational teams, providing an unbiased perspective on AI ethical considerations and ensuring that AI practices are aligned with ethical values.
Criteria for Selecting Board Members:
Expertise: The board members will be chosen for their proven expertise in AI ethics, data protection, human rights, and relevant legal frameworks.
Diversity: The board will strive for diversity in its membership, including perspectives from different disciplines, backgrounds, and geographical locations.
Independence: Board members will be chosen for their independence from Nexly AI's operational teams and any conflicts of interest.
Process for Reviewing AI Systems:
Initial Assessment: The board will review the proposed AI system's design, intended use, data sources, potential risks, and mitigation strategies.
Ethical Analysis: The board will assess the AI system's compliance with ethical principles, including fairness, transparency, accountability, and human oversight.
Risk Assessment: The board will conduct a thorough risk assessment to identify and evaluate potential harms to individuals, society, or fundamental rights.
Recommendations and Approval: The board will provide recommendations for improvements or modifications to the AI system and will ultimately decide whether to approve its deployment.
52. Continuous Ethical AI Training: At Nexly AI, we are committed to fostering a culture of ethical AI development and deployment. We believe that continuous education and training are essential to ensuring that our AI systems are developed and used responsibly. We accomplish this through:
Mandatory Ethical AI Training: All employees involved in any aspect of AI development, deployment, or monitoring, from engineers to product managers, are required to complete comprehensive training programs on ethical AI principles and best practices. This training covers topics such as:
AI Ethics Frameworks: Understanding foundational principles of fairness, transparency, accountability, privacy, and non-discrimination in AI development and deployment.
Bias Detection and Mitigation: Recognizing and mitigating biases in AI algorithms and training data to ensure fairness and equity.
Privacy and Security: Understanding the implications of AI on data privacy and implementing appropriate safeguards to protect user data.
Explainability and Transparency: Understanding the importance of making AI systems explainable and transparent, enabling users to understand how AI decisions are made.
Responsible AI Use Cases: Identifying and evaluating potential ethical risks and benefits of different AI applications, ensuring that AI is used responsibly and for societal good.
Ongoing Professional Development: We encourage and support continuous learning and development through:
Internal Workshops and Seminars: Regularly hosting workshops and seminars led by internal experts and external thought leaders on emerging trends in ethical AI, new research findings, and best practices.
External Training and Certifications: Providing opportunities for employees to pursue external training programs and certifications in ethical AI, such as the "Responsible AI Practitioner" certification or other relevant programs.
AI Ethics Review Boards: Establishing dedicated AI ethics review boards composed of internal and external experts to provide guidance, oversight, and independent assessments of AI systems and projects.
Embedding Ethical AI Considerations in Product Development: We integrate ethical considerations into the entire product development lifecycle:
Design Thinking Workshops: Ensuring that ethical AI principles are embedded in design thinking workshops from the initial ideation phase.
Code Reviews and Audits: Conducting regular code reviews and audits to identify and address potential ethical risks and biases in AI algorithms.
User Feedback and Testing: Actively seeking user feedback and conducting user testing to assess the ethical implications of AI systems and make necessary adjustments.
Continuous Monitoring and Evaluation: We regularly evaluate our training programs and update them based on:
Feedback from Employees: Seeking feedback from employees to understand the effectiveness of the training and identify areas for improvement.
Industry Best Practices: Staying abreast of evolving ethical AI standards and best practices set by regulatory bodies and industry organizations.
New Research Findings: Incorporating new research findings and technological advancements in AI ethics into our training programs.
By prioritizing continuous ethical AI training, we strive to ensure that our employees are equipped with the knowledge and skills to develop and deploy AI responsibly and ethically. This commitment to ongoing education and professional development is crucial for building trust with our users and fostering a responsible AI ecosystem.
53. Community Engagement and Feedback: Nexly AI shall actively engage with the community and solicit feedback on AI systems' impact, inviting input from users, stakeholders, and advocacy groups to inform AI governance practices and decision-making. This involves creating channels for user feedback, engaging with relevant stakeholders, and participating in public dialogues about the impact of AI.
54. Contact: For any privacy concerns, contact info@nexly.eu.
Updates: This policy may change; check this page regularly.