Nexly Privacy Policy

Effective Date: July 24, 2025

Nexly is committed to protecting all personal data. This Policy outlines how personal information is collected, processed, and safeguarded when Users (natural or legal persons engaging with the Site, https://www.nexly.eu) interact with our services.

1. Governance

As Data Controller, Nexly ensures all processing complies with:

  • GDPR: EU Regulation 2016/679 within the EEA.
  • National Law: Domestic statutes protecting personal data rights.
  • Global Standards: APEC CBPRs and OECD principles for international consistency.

Third-party Processors may be engaged for hosting, analytics, or support, all vetted to ensure strict compliance and security.

This Policy describes:

  • Data types collected and processing rationale;
  • Purposes, storage, and retention;
  • User rights and remedies;
  • Technical and organizational safeguards;
  • Cross-border transfer mechanisms;
  • Procedures for inquiries and exercising rights.

Privacy by design and by default is embedded across operations. Engagement presumes informed consent where required; this Policy may be updated as regulations evolve.

2. Processing of Personal Data

At Nexly, protecting your personal data is central to our operations. We collect, store, and process data only for specific, legitimate purposes, ensuring it is handled securely, transparently, and proportionately. Our processing practices are guided by principles of accountability, minimization, and user control, ensuring that your data is used to provide, optimize, and personalize services, comply with legal obligations, or support legitimate business interests.

2.1 Direct Collection

We collect personal data directly from you when you voluntarily provide it through our platforms, communications, or services. Key examples include:

  • Placing Orders: Name, email, shipping/billing addresses, phone, and payment details for secure processing, fulfillment, and support.
    Lawful Basis: Performance of a contract; legitimate interest in fulfilling orders securely.
  • Account Registration: Name, email, password, username, and optional profile info for secure access and personalization.
    Lawful Basis: Performance of a contract; legitimate interest in account management.
  • Loyalty Programs: Name, email, and purchase history for membership and tailored offers.
    Lawful Basis: Consent; legitimate interest in engagement.
  • Email Subscriptions: Email and preferences for newsletters and updates.
    Lawful Basis: Consent.
  • Surveys and Feedback: Demographics, preferences, and feedback to improve products/services.
    Lawful Basis: Legitimate interest; consent for marketing.
  • Customer Support: Name, contact info, and messages to provide assistance.
    Lawful Basis: Legitimate interest; performance of a contract.

2.2 Passive Collection

We automatically collect data as you interact with our digital platforms to understand behavior, improve functionality, personalize experiences, and maintain security.

2.2.1 Types of Data Collected
  • Platform Usage and Navigation: Pages viewed, clicks, search queries, session duration.
    Purpose: Optimize usability and performance.
    Lawful Basis: Legitimate interest.
  • Content Engagement: Interactions with articles, videos, product pages, likes, shares, scroll depth.
    Purpose: Personalize content and measure engagement.
    Lawful Basis: Legitimate interest.
  • Device and Technical Information: IP address, device type, browser, OS, screen resolution, network info, sensors.
    Purpose: Compatibility, performance, security.
    Lawful Basis: Legitimate interest.
  • Cookies, Pixels, and Tracking: Analytics and marketing tools for monitoring and personalization.
    Purpose: Service improvement and personalization.
    Lawful Basis: Legitimate interest; consent where required.
  • Cross-Device and Third-Party Data: Data from multiple devices or partners.
    Purpose: Unified experience and optimized delivery.
    Lawful Basis: Legitimate interest; consent where required.
  • Error and Performance Data: Crash reports, error logs, and performance metrics.
    Purpose: Maintain platform reliability.
    Lawful Basis: Legitimate interest.

2.2.2 Methods of Collection
  • Cookies, pixels, tags, local storage, embedded SDKs
  • Automated logging of server, network, and API interactions
  • Analytics and monitoring tools integrated into platforms
  • Device and sensor signals, e.g., location and motion sensors

2.2.3 How Passive Data Is Used
  • Analyze behavior to optimize platform functionality
  • Deliver personalized content and recommendations
  • Ensure cross-device compatibility and performance
  • Detect and prevent fraud and security threats
  • Measure campaign and platform feature effectiveness
  • Provide seamless cross-device experiences

2.2.4 Data Protection and Compliance

Passive data is processed in accordance with applicable privacy laws, with robust technical and organizational measures including encryption, access controls, monitoring, and anonymization where possible. Data is retained only as long as necessary for its intended purpose.

2.3 Collection from Other Sources

We may obtain data from affiliates, partners, public sources, social media, and data aggregators. Examples include:

  • Interactions with third-party advertisements or social networks
  • Publicly available social profile information
  • Data shared by partners for integrated services or rewards

Lawful Basis: Legitimate interest; consent where required.
Safeguards: Third-party sources are vetted for GDPR and privacy compliance.

2.4 User Control and Transparency

Nexly provides robust controls over personal data, including access, updates, deletion, cookie preferences, marketing opt-outs, and dashboards with transparency reports.

Through these measures, Nexly ensures purposeful, secure, transparent, and user-centric processing.

3. Purpose of Personal Data Processing

At Nexly, we process personal data with a commitment to transparency, proportionality, and respect for your rights. Each processing activity serves a clearly defined and legitimate purpose, aligned with the principles of the GDPR (Articles 5 and 6). We ensure that data is collected and used only when necessary to provide, improve, or personalize our services, support lawful interests, or comply with legal obligations. Below we provide a detailed overview of the primary purposes, lawful bases, and safeguards applied to protect your privacy.

3.1 Customer Service

We use personal data to deliver efficient, responsive, and personalized customer support. Examples include:

  • Responding to inquiries, service requests, and troubleshooting issues
  • Investigating and resolving complaints or disputes
  • Enhancing satisfaction, loyalty, and long-term engagement through tailored support

Lawful Basis: Performance of a contract; legitimate interest in providing quality customer care.
Safeguards: Access restricted to trained support personnel under confidentiality agreements; data is used only to address requests.

3.2 Contests, Marketing, and Promotions

Personal data enables Nexly to inform you about products, services, promotions, and opportunities. Examples include:

  • Email newsletters, product announcements, and promotional campaigns
  • Social media and digital advertising campaigns tailored to your interests
  • Notifications through websites or apps regarding contests, offers, or events

Lawful Basis: Consent (for electronic marketing); legitimate interest in promoting our services.
User Control: You can opt out anytime via unsubscribe links, account settings, or communication preferences.

3.3 Third-Party Social Networks

When interacting with Nexly through social media features (e.g., “Share” or “Like” buttons), personal data may be processed to enhance engagement and enable targeted promotions. Limited profile information may be used to personalize experiences based on your interactions.

Lawful Basis: Consent via social media platform.
Safeguards: Processing limited to shared info; subject to your social media privacy settings.

3.4 Personalization

With explicit consent, personal data enables tailored experiences. Examples include:

  • Personalized product recommendations: Based on browsing history, past purchases, and preferences
  • Content customization: Displaying articles, videos, and features aligned with your interests
  • Targeted advertising: Delivering relevant ads while respecting ad preferences and cookie settings

Lawful Basis: Consent for profiling and targeted marketing.
User Control: Opt out anytime via Cookie Settings, Ad Choices, or account preferences.

3.5 Order Fulfillment

Personal data is required to process and deliver orders securely. Activities include:

  • Confirming and updating order status
  • Verifying identity to prevent unauthorized access
  • Detecting, investigating, and preventing fraudulent or suspicious activities

Lawful Basis: Performance of a contract; legal obligation (e.g., tax compliance).
Safeguards: Payment info encrypted and processed only by PCI-DSS certified providers; access limited to authorized personnel.

3.6 Other Operational Purposes

Personal data may also be processed to support broader operational, analytical, and security objectives:

  • Conducting research and analysis to improve products and services
  • Advanced analytics to understand user behavior and enhance usability
  • Strengthening platform security and protecting against cyber threats
  • Managing user accounts, authentication, and access controls
  • Driving continuous innovation, quality improvements, and operational efficiency

Lawful Basis: Legitimate interest in service improvement and platform security.
Safeguards: Data anonymized or aggregated wherever possible; strict access controls applied.

3.7 Summary

Each processing purpose at Nexly is guided by necessity, proportionality, and GDPR compliance. Users retain control over their data, including updating preferences, withdrawing consent, and exercising rights. Technical and organizational measures (encryption, access restrictions, and monitoring) ensure privacy and security at all times.

4. Types of Personal Data Processed

At Nexly, we process different categories of personal data to deliver secure, personalized, and high-quality services. Each category serves specific purposes, is collected under a lawful basis, and is safeguarded in accordance with GDPR requirements. We adhere to principles of data minimization, transparency, and proportionality, ensuring you have full visibility and control over how your data is used.

4.1 Contact Data

  • Name
  • Postal address
  • Email address
  • Phone number
  • Social media handles (e.g., Facebook, LinkedIn)

Purpose: Customer service, account correspondence, contractual obligations, and updates.
Lawful Basis: Performance of a contract; legitimate interest in communication.
Safeguards: Encrypted storage, strict access controls, limited to authorized personnel.

4.2 Account Login Data

  • Login ID / email address
  • Screen name
  • Password (hashed and salted)
  • Security questions and answers

Purpose: Authentication, account security, fraud prevention.
Lawful Basis: Performance of a contract; legitimate interest in security.
Safeguards: Industry-standard encryption, two-factor authentication, strict access controls.
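
For illustration only, the following minimal Python sketch shows one common way credentials can be stored as salted hashes rather than in plaintext, as noted for the password data above. It is a simplified example using the standard-library PBKDF2 function; the parameters shown are hypothetical and do not describe Nexly's production configuration.

```python
import hashlib
import hmac
import os

# Illustrative parameters only; real deployments tune these to current guidance.
HASH_NAME = "sha256"
ITERATIONS = 600_000
SALT_BYTES = 16

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, derived_key); only these values are stored, never the password."""
    salt = os.urandom(SALT_BYTES)
    key = hashlib.pbkdf2_hmac(HASH_NAME, password.encode("utf-8"), salt, ITERATIONS)
    return salt, key

def verify_password(password: str, salt: bytes, stored_key: bytes) -> bool:
    """Re-derive the key from the candidate password and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac(HASH_NAME, password.encode("utf-8"), salt, ITERATIONS)
    return hmac.compare_digest(candidate, stored_key)

salt, key = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, key)
assert not verify_password("wrong password", salt, key)
```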

4.3 Device Data

  • IP address
  • Device identifiers (cookies, advertising IDs)
  • Operating system and browser type/version
  • Screen resolution and system settings

Purpose: Website/app functionality, fraud detection, performance optimization.
Lawful Basis: Legitimate interest in security and platform functionality; consent for cookies and tracking technologies.
Safeguards: Data minimization, retention limited to operational necessity, anonymization where feasible.

4.4 Payment Data

  • Credit/debit card details
  • Alternative payment methods (e.g., PayPal, Apple Pay)
  • Transaction and payment history

Purpose: Payment processing, fraud prevention, regulatory compliance.
Lawful Basis: Performance of a contract; legal obligation.
Safeguards: PCI-DSS compliance, end-to-end encryption, tokenization, restricted access.

4.5 Demographic Data

  • Gender
  • Age range
  • Geographic region
  • Interests, hobbies, preferences

Purpose: Personalization, aggregated analytics, product/service development.
Lawful Basis: Consent for optional fields; legitimate interest in service improvement.
Safeguards: Aggregation and anonymization wherever possible.

4.6 Third-Party Social Network Data

  • Name
  • Email address
  • Profile picture
  • Public posts or interactions (if authorized)

Purpose: Seamless login, social engagement, enhanced user experience.
Lawful Basis: Consent at the time of linking your account.
Safeguards: Restricted to explicitly authorized fields; limited processing.

4.7 Site Usage Data

  • Pages viewed and session duration
  • Links clicked and search queries
  • Scroll depth and content engagement metrics
  • Navigation patterns and click paths

Purpose: Improve navigation, measure engagement, optimize performance, enhance user experience.
Lawful Basis: Legitimate interest in analytics; consent for cookies and tracking.
Safeguards: Anonymization for analytics, limited retention, secure storage.

4.8 Feedback Data

  • Product/service reviews
  • Survey responses and feedback forms
  • Customer support interactions

Purpose: Service improvement, quality assurance, dispute resolution.
Lawful Basis: Legitimate interest; consent for surveys or marketing feedback.
Safeguards: Restricted access; anonymization for trend analysis.

4.9 Geolocation Data

  • Approximate device-based location
  • GPS coordinates (if enabled)

Purpose: Fraud prevention, location-based personalization, targeted marketing.
Lawful Basis: Explicit consent (opt-in).
Safeguards: Granular opt-out controls, short retention periods, anonymization when aggregated.

4.10 Inferences

  • Purchase history analysis
  • Usage and engagement patterns
  • Demographic and behavioral profiling

Purpose: Personalization, predictive analytics, service enhancement.
Lawful Basis: Legitimate interest; consent for profiling.
Safeguards: Algorithmic fairness testing, opt-out mechanisms, anonymization for aggregated insights.

Summary: Nexly carefully categorizes all personal data, limits processing to necessary purposes, and implements robust safeguards. Users retain full GDPR rights, including access, correction, deletion, and restriction of their data.

5. Consent

By accessing and using the Site, you acknowledge having read this Privacy Policy and willingly provided free, specific, informed, and unequivocal consent to the processing of your personal data as outlined herein.

Consent is obtained through active actions, such as ticking the checkbox next to the hyperlinked Privacy Policy. Certain actions on the Site, or establishing a contractual relationship with Nexly, require consent as a mandatory condition.

Important: Withdrawal of consent does not affect the lawfulness of prior processing. You may manage or withdraw consent at any time via your account settings or by contacting Nexly directly.

6. Data Minimization and Purpose Limitation

We are committed to the principle of data minimization, collecting and processing only the personal data that is strictly necessary for the specific purposes outlined in this policy.

Nexly ensures that no personal data is collected or processed beyond what is essential for the intended purpose, in compliance with GDPR principles of necessity and proportionality.

Key Principle: Limiting data collection reduces privacy risks, enhances security, and ensures that users retain control over their personal information. All data collected is purposeful, relevant, and strictly necessary.

7. Data Sovereignty and Localization

Nexly recognizes the importance of data sovereignty and user control. Users are offered the option to store their data within their chosen geographical region, ensuring compliance with regional data protection laws and regulations, such as the GDPR or the California Consumer Privacy Act (CCPA).

Key Feature: Regional data storage empowers users, enhances legal compliance, and ensures that personal data remains subject to the privacy standards of the selected jurisdiction.

8. Retention Period

At Nexly, we strictly adhere to the principle of storage limitation under Article 5(1)(e) of the GDPR. Personal data is retained only as long as necessary to fulfill its collection purposes or comply with legal, regulatory, or contractual obligations. Once no longer required, data is securely deleted, anonymized, or aggregated to prevent identification.

Retention periods vary by data type, operational needs, and statutory requirements. Generally, most personal data is retained for five years after the last interaction, while some data (e.g., tax records) may be retained longer, and technical logs may be retained for shorter periods.

Retention Summary: Data retention is carefully tailored to each category to balance compliance, operational needs, and user privacy.

  • Contact Data: Up to five years after last interaction to manage customer relationships and recordkeeping.
  • Account Login Data: Up to five years post-account closure for fraud prevention and dispute resolution.
  • Device Data: Six months for security investigations and performance monitoring.
  • Payment Data: Duration of transaction plus five years for tax and regulatory compliance.
  • Demographic Data: Up to five years for personalization and analytics.
  • Third-Party Social Network Data: Up to five years or until integration revoked.
  • Site Usage Data: Six months for analytics and UX improvement.
  • Feedback Data: Up to five years for service quality and traceability.
  • Geolocation Data: Six months for personalization and fraud prevention.
  • Inferences: Up to five years or until consent withdrawal for AI-driven enhancements.

Exceptional Retention: Some data may be retained longer for legal obligations, ongoing claims, or explicit consent. All extended retention is reviewed periodically to ensure proportionality.

Data Deletion and Anonymization: Upon expiration, data is securely erased or anonymized. Anonymized data may remain for research, analytics, or product development, without any link to identifiable individuals.

Transparency and User Control: You retain full rights to access, correct, or delete personal data. Retention policies balance operational needs with user privacy, ensuring secure, high-quality service.
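
As a purely illustrative sketch, the retention periods summarized above could be encoded and enforced programmatically along the following lines; the category keys and helper function are hypothetical and do not describe Nexly's internal systems.

```python
from datetime import datetime, timedelta, timezone

# Retention periods mirroring the summary above (illustrative encoding only).
RETENTION_PERIODS = {
    "contact_data": timedelta(days=5 * 365),
    "account_login_data": timedelta(days=5 * 365),
    "device_data": timedelta(days=180),
    "payment_data": timedelta(days=5 * 365),
    "demographic_data": timedelta(days=5 * 365),
    "social_network_data": timedelta(days=5 * 365),
    "site_usage_data": timedelta(days=180),
    "feedback_data": timedelta(days=5 * 365),
    "geolocation_data": timedelta(days=180),
    "inferences": timedelta(days=5 * 365),
}

def is_expired(category: str, last_interaction: datetime) -> bool:
    """True when a record has outlived its retention period and should be erased or anonymized."""
    return datetime.now(timezone.utc) - last_interaction > RETENTION_PERIODS[category]
```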

9. Data Recipients and Disclosure

At Nexly, we recognize that responsible management of personal data extends beyond internal handling to careful oversight of external disclosures. Personal data is shared only with authorized recipients who are contractually or legally obligated to maintain confidentiality, security, and lawful processing.

All disclosures serve legitimate business purposes, such as service delivery, transaction processing, security enforcement, analytics, and user experience enhancement. Compliance is governed by applicable data protection laws, including the GDPR, and by Nexly’s internal privacy policies, which emphasize transparency, accountability, and proportionality.

9.1 Service Providers and Data Processors

Nexly engages carefully vetted third-party service providers acting solely as data processors under formal Data Processing Agreements (DPAs) enforcing GDPR compliance, security measures, and strict data use limitations.

  • Cloud Infrastructure and Hosting (e.g., Google Cloud, AWS): Processes IP addresses, device identifiers, and browsing activity to ensure platform availability, scalability, and cybersecurity.
  • Payment Processing (e.g., Stripe, PayPal): Handles sensitive financial information strictly for transactions, fraud prevention, and regulatory compliance.
  • Data Analytics (e.g., Google Analytics, Mixpanel): Processes aggregated or anonymized data to evaluate user engagement and service performance; no PII is disclosed without explicit consent.
  • Customer Support Platforms (e.g., Freshdesk, Zendesk): Receives contact details and case-related data for efficient support and issue resolution.

All providers undergo continuous due diligence, including security audits and compliance checks. A current list of primary service providers is available for transparency.

9.2 Behavioral and Interest-Based Advertising

Nexly may share non-sensitive data with advertising partners to deliver personalized, interest-based advertising through pseudonymous profiles derived from browsing history, device identifiers, and interaction patterns.

  • You can opt out of personalized advertising anytime via our Ad Choices page.
  • Cookie and tracking preferences can be configured via the Cookie Settings page.
  • Sensitive personal data (health, biometrics, precise location) is never shared for advertising purposes.
  • Browser-level “Do Not Track” signals and emerging privacy standards are continuously evaluated.

9.3 Credit Reporting and Debt Collection

Limited personal and financial data may be shared with credit reporting agencies to assess creditworthiness, or with licensed debt collection agencies for overdue payments. All disclosures are scoped, documented, and legally overseen to ensure proportionality.

9.4 Legal Disclosures and Business Transfers

Personal data may also be disclosed in the following circumstances:

  • Compliance with statutory obligations, regulatory requirements, or lawful governmental requests.
  • Response to subpoenas, court orders, or legal proceedings.
  • Protection of Nexly’s, user’s, or public rights, safety, property, or privacy.
  • Corporate restructuring (mergers, acquisitions, asset transfers, bankruptcy) with equivalent safeguards and user notifications whenever feasible.

Across all disclosures, recipients must process data only for the explicit purpose for which it was shared. Disclosures follow necessity and proportionality principles; your data is never sold or used for unrelated purposes. Marketing or prospecting requires prior clear notice and, where required, explicit user consent.

Safeguards and Accountability: All recipients implement technical and organizational measures equivalent to Nexly’s standards, including encryption, access control, and audits. Regular reviews ensure personal data remains secure, and risks are proactively mitigated.

10. Automated Processing

At Nexly, we leverage automated processing, including artificial intelligence (AI) and machine learning algorithms, to enhance your experience, improve platform functionality, and deliver personalized content and services. Automated processing involves systematic analysis of data such as browsing behavior, purchase history, engagement metrics, and interaction patterns to generate insights and actionable outputs. These processes are transparent, lawful, and respectful of your privacy, in accordance with GDPR Articles 22 and 5.

10.1 Personalized Recommendations

AI algorithms analyze your activity to provide tailored product and service recommendations, such as:

  • Suggesting travel accessories or deals after viewing travel-related products.
  • Recommending complementary products based on past purchases.
  • Highlighting relevant services, bundles, or promotions aligned with your interests.

Purpose: Improve relevance, convenience, and user satisfaction while facilitating discovery.
Lawful Basis: Legitimate interest in service personalization; explicit consent where required.
User Control: Manage recommendation preferences or opt out entirely via account settings or AI personalization controls.
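
As a highly simplified, hypothetical sketch of this kind of processing (not Nexly's actual recommendation engine), the snippet below ranks catalog items by cosine similarity between a user's interest profile and item feature vectors; all names and values are illustrative.

```python
import math

def cosine(a: dict[str, float], b: dict[str, float]) -> float:
    """Cosine similarity between two sparse feature vectors keyed by feature name."""
    dot = sum(a[k] * b[k] for k in set(a) & set(b))
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def recommend(profile: dict[str, float], catalog: dict[str, dict[str, float]], top_n: int = 3) -> list[str]:
    """Rank catalog items by similarity to a (hypothetical) user interest profile."""
    return sorted(catalog, key=lambda item: cosine(profile, catalog[item]), reverse=True)[:top_n]

# Hypothetical features derived from browsing and purchase history.
user = {"travel": 0.9, "electronics": 0.2}
items = {
    "travel_adapter": {"travel": 0.8, "electronics": 0.6},
    "suitcase": {"travel": 1.0},
    "keyboard": {"electronics": 0.9},
}
print(recommend(user, items))  # travel-related items rank first
```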

10.2 Content Customization

Automated processes adapt content presentation based on your interests and prior interactions, including:

  • Displaying articles or blog posts relevant to topics previously engaged with, such as online privacy or cybersecurity.
  • Prioritizing news, tutorials, or promotional content aligned with demonstrated preferences.
  • Optimizing the sequence, layout, and visibility of platform features to match usage patterns.

Purpose: Enhance engagement, streamline navigation, and provide a more relevant platform experience.
Lawful Basis: Legitimate interest in content personalization; explicit consent where required.
User Control: Adjust content preferences or disable automated content recommendations through account settings or platform controls.

10.3 AI Transparency, Explainability, and User Control

Nexly employs AI and machine learning technologies, including recommendation engines, predictive analytics, and personalization algorithms. Data processed may include:

  • Browsing and purchase history
  • Interaction and engagement patterns
  • Preferences explicitly provided by you
  • Aggregated and anonymized behavioral data

Transparency and control mechanisms include:

  • Explainability: Users are informed about the purpose of automated processing and data types used.
  • Opt-Out Options: Users may disable AI-driven personalization for recommendations, ads, or content customization via account settings or platform controls.
  • Data Minimization: Only necessary data is processed; irrelevant or sensitive personal data is excluded unless you have explicitly consented.
  • Fairness and Bias Mitigation: Automated models are audited to detect and reduce bias, ensuring equitable outcomes in recommendations and content delivery.

Purpose: Deliver relevant, timely, and useful content and product recommendations; enhance engagement; optimize platform usability.
Lawful Basis: Legitimate interest in service personalization and user experience; explicit consent where required.
Safeguards: Transparent information, opt-out mechanisms, and technical safeguards to maintain user control, privacy, and fairness.

11. Data Protection Officer (DPO)

Delia Lazarescu serves as the Data Protection Officer (DPO) at Nexly, responsible for ensuring strict compliance with national and supranational regulations governing the collection, storage, and processing of personal data. The DPO oversees all data protection activities, guarantees the highest levels of data security, and acts as the primary point of contact for inquiries or concerns related to personal data handling.

Contacting the DPO: Users can reach the DPO for questions regarding privacy rights, data access requests, or complaints about data processing practices. The DPO ensures independent oversight and transparency in all privacy-related matters.

12. User Rights

At Nexly, we are fully committed to upholding your fundamental data protection rights under the General Data Protection Regulation (GDPR) and other applicable privacy laws. We empower users to exercise control over their personal data at any time, providing clear mechanisms for access, correction, deletion, and other rights. Requests are handled promptly, transparently, and securely. You can exercise your rights or seek further information by contacting our Data Protection Team at info@nexly.eu.

Overview: You have multiple rights under GDPR, including access, rectification, erasure, restriction, portability, objection, and the right to lodge complaints. Nexly ensures these rights are actionable, protected, and respected.

12.1 Right of Access

  • Obtain confirmation whether we process your personal data.
  • Receive a copy of your data and detailed information on its use.
  • Know the categories of recipients and retention periods.
  • Learn about automated decision-making, including profiling, and its effects.

12.2 Right to Rectification

Nexly updates inaccurate, incomplete, or outdated data promptly and, where applicable, notifies third parties to maintain accuracy and consistency.

12.3 Right to Erasure (“Right to be Forgotten”)

Request deletion of personal data when it is no longer necessary or consent is withdrawn, subject to legal or regulatory obligations.

12.4 Right to Restriction of Processing

  • When data accuracy is contested, pending verification.
  • When you object and we are evaluating grounds for continued processing.
  • When processing is unlawful and restriction is preferred over erasure.

During restriction, your data is securely stored and processed only with consent or as legally required.

12.5 Right to Data Portability

Receive personal data in a structured, machine-readable format and, where feasible, transfer it securely to another controller.

12.6 Right to Object and Automated Decision-Making

  • Object to processing on grounds relating to your particular situation, including direct marketing and profiling.
  • Request human intervention in automated decision-making.
  • Express viewpoints or contest decisions produced by automated systems.

12.7 Right to Lodge a Complaint

Lodge complaints with your local Data Protection Authority (DPA) or pursue judicial remedies. Nexly cooperates fully with regulatory authorities to restore your rights.

12.8 Operational Safeguards and Accessibility

  • Dedicated channels for rights requests and inquiries.
  • Protocols to verify identity and prevent unauthorized access.
  • Documentation of all actions taken to ensure accountability.
  • Ongoing staff training to handle requests in accordance with GDPR and best practices.

Summary: Nexly not only meets legal obligations but fosters a culture of trust, transparency, and accountability. Exercising your rights is central to ensuring you remain in full control of your personal data.

13. Data Portability and Deletion

Nexly is committed to empowering users with full control over their personal data. In line with the General Data Protection Regulation (GDPR), we provide robust mechanisms for both data portability and account/data deletion. These features are secure, user-friendly, and fully compliant with applicable legal obligations.

Overview: Users have the right to transfer their personal data to other services and request permanent deletion of their data in a secure and verifiable manner.

13.1 Data Portability

You have the right to obtain and reuse your personal data across different services in a structured, commonly used, machine-readable format. This includes account information, transaction history, preferences, and content you have provided or interacted with.

How it works:

  • Submit a data portability request via info@nexly.eu or your account settings.
  • Nexly verifies your identity to prevent unauthorized access.
  • Once verified, data is provided in a structured format (e.g., JSON, CSV, XML) ready for secure transfer.
  • Requests are processed promptly within GDPR timeframes, typically within one month.

Safeguards: Data is transmitted securely using encryption. Only the requested categories are included, and sensitive data is handled with extra protection.
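
For illustration, a minimal sketch of what a structured, machine-readable export might look like; the field names, categories, and export function are hypothetical and not part of Nexly's actual portability tooling.

```python
import csv
import io
import json
from datetime import datetime, timezone

def export_user_data(record: dict, fmt: str = "json") -> str:
    """Serialize a (hypothetical) user record into a portable JSON or CSV payload."""
    if fmt == "json":
        payload = {"exported_at": datetime.now(timezone.utc).isoformat(), "data": record}
        return json.dumps(payload, indent=2, ensure_ascii=False)
    if fmt == "csv":
        buffer = io.StringIO()
        writer = csv.writer(buffer)
        writer.writerow(["field", "value"])
        for field, value in record.items():
            writer.writerow([field, value])
        return buffer.getvalue()
    raise ValueError(f"Unsupported format: {fmt}")

example = {"name": "Jane Doe", "email": "jane@example.com", "orders": 12}
print(export_user_data(example, "json"))
print(export_user_data(example, "csv"))
```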

13.2 Right to Deletion (“Right to be Forgotten”)

You may request permanent deletion of your Nexly account and associated personal data, subject to legal, regulatory, or contractual obligations. Deletion is secure, verifiable, and ensures irrecoverable removal from active systems.

How it works:

  • Submit a deletion request via info@nexly.eu or your account management portal.
  • Identity verification prevents unauthorized deletions.
  • Personal data is securely erased from all active databases.
  • Where deletion is constrained by legal obligations (e.g., tax records), data is anonymized or retained for the minimum required duration.

Safeguards: Secure erasure techniques are applied. Backups containing your data are overwritten or securely isolated. Confirmation is provided once deletion is complete.

Summary: Nexly prioritizes user autonomy and privacy by providing transparent, secure, and GDPR-compliant processes for data portability and deletion. These measures strengthen user trust and empower users to control their personal data.

14. Cross-Border Data Transfers

Nexly operates in a global digital ecosystem, necessitating the secure transfer of personal data across international borders to support operational efficiency, service delivery, and innovation. Transfers outside the European Economic Area (EEA) are handled with strict safeguards, in line with GDPR requirements and recognized international privacy standards.

14.1 Legal Safeguards

All cross-border transfers are conducted under legal frameworks providing enforceable protections and accountability:

  • Standard Contractual Clauses (SCCs): Transfers to partners or processors outside the EEA use European Commission–approved SCCs. These ensure:
    • GDPR-equivalent obligations on recipients regarding confidentiality, security, and user rights.
    • Comprehensive assessment of recipient country’s legal environment to mitigate risks of inadequate protections.
    • Enforceable rights for individuals, allowing legal redress in case of misuse or non-compliance.
  • Binding Corporate Rules (BCRs): Intra-group transfers across Nexly subsidiaries rely on approved BCRs, ensuring:
    • Consistent privacy and security standards throughout the Nexly group globally.
    • Defined accountability measures, including audits, monitoring, and mandatory employee training.
    • Enforceable commitments to regulators and individuals, maintaining GDPR-level protections worldwide.
  • International Frameworks and Adequacy Decisions: Transfers to jurisdictions recognized by the European Commission (e.g., via adequacy decisions or frameworks like the EU–U.S. Data Privacy Framework) ensure legal protections equivalent to those in the EEA.

14.2 Technical and Organizational Safeguards

Nexly implements advanced measures to protect data during cross-border transfers:

  • Encryption and Pseudonymization: Data is encrypted in transit and at rest; pseudonymization limits exposure of identifiable information (see the illustrative sketch after this list).
  • Access Controls: Role-based restrictions ensure only authorized personnel access transferred data.
  • Transfer Impact Assessments (TIAs): Each transfer is assessed for risk, and mitigation strategies are applied.
  • Compliance Monitoring: Continuous audits and contractual obligations ensure recipients follow Nexly’s data protection standards and the law.
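
The pseudonymization idea referenced above can be illustrated with a keyed hash that replaces a direct identifier before data is transferred. The sketch below is a simplified, hypothetical example; key handling and field names do not reflect Nexly's production pipeline.

```python
import hashlib
import hmac
import os

# In practice the key would live in a secrets manager; it is generated here only for illustration.
PSEUDONYM_KEY = os.urandom(32)

def pseudonymize(identifier: str, key: bytes = PSEUDONYM_KEY) -> str:
    """Replace a direct identifier with a keyed, non-reversible token before transfer."""
    return hmac.new(key, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"email": "jane@example.com", "country": "FR", "purchases": 3}
transfer_ready = {**record, "email": pseudonymize(record["email"])}
print(transfer_ready)  # the email is replaced by a pseudonymous token
```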

14.3 User Rights and Transparency

Users are informed about cross-border transfers and retain full rights to access, correct, restrict, or object to the processing of their personal data. Detailed information regarding transferred data, purposes, and recipients is available in our privacy policy. Nexly commits to transparency, ensuring international transfers do not compromise user privacy.

Summary: Through legally binding agreements, internationally recognized frameworks, and rigorous technical and organizational safeguards, Nexly ensures cross-border data transfers maintain the highest standards of privacy, security, and compliance, safeguarding user trust and rights globally.

15. Automated Decision-Making and Profiling

Nexly utilizes automated decision-making, including profiling and algorithmic analysis, to optimize and personalize your experience across our platforms. These processes use data such as browsing behavior, purchase history, preferences, engagement patterns, and other relevant inputs to support decision-making, content delivery, and service enhancements.

15.1 Purpose of Automated Processing

Automated decision-making is employed for the following purposes:

  • Personalization: Tailoring recommendations, content, and services to your interests to enhance engagement and satisfaction.
  • Fraud Prevention and Security: Identifying unusual patterns and potential security threats in real-time to protect accounts and platform integrity.
  • Operational Optimization: Streamlining service delivery, improving platform performance, and enabling efficient allocation of resources.
  • Marketing and Advertising: Delivering relevant advertisements or offers while respecting your preferences and consent regarding targeting technologies.

15.2 How Automated Decisions Are Made

Automated processes rely on sophisticated algorithms and machine learning models. Key factors considered include:

  • Interaction history with Nexly platforms (clicks, views, engagement metrics).
  • Account activity and transaction history.
  • Demographic and preference data (provided directly or inferred with consent).
  • Device and technical signals (browser type, operating system, geolocation, etc.).

Algorithms are designed to be transparent, auditable, and periodically reviewed to ensure fairness, accuracy, and minimization of bias.

15.3 User Rights and Control

In accordance with GDPR and global privacy standards, you retain the right to:

  • Request meaningful information about the logic, significance, and expected consequences of automated decisions that affect you.
  • Contest decisions made solely on automated processing, including profiling, that produce legal or similarly significant effects on you.
  • Request human intervention to review, modify, or override automated decisions where appropriate.

Nexly ensures mechanisms for contesting automated decisions are accessible, responsive, and free of charge. Users may exercise these rights or seek clarification by contacting info@nexly.eu.

15.4 Safeguards and Oversight

Nexly implements multiple safeguards to protect users from adverse impacts of automated decision-making:

  • Regular algorithmic audits to assess fairness, accuracy, and bias mitigation.
  • Human oversight in critical processes to ensure decisions are reasonable and legally compliant.
  • Data minimization and purpose limitation, ensuring automated decisions use only relevant data.
  • Robust technical and organizational controls to maintain data security and confidentiality during profiling and automated processing.

Summary: Automated decision-making and profiling at Nexly are designed to enhance user experience, platform security, and operational efficiency, while respecting your rights, providing transparency, and maintaining robust oversight to prevent misuse or unfair outcomes.

16. Data Breach Notification

Nexly takes the security and integrity of your personal data extremely seriously. Despite robust technical, organizational, and procedural safeguards, no system is completely immune to breaches. In the unlikely event of a data breach affecting your personal data, we have established a comprehensive Incident Response and Breach Notification Framework to ensure timely, transparent, and legally compliant communication.

16.1 Detection and Containment

All Nexly systems are continuously monitored using advanced intrusion detection and security information tools. In the event of a suspected breach, we:

  • Immediately investigate the incident to determine the scope, severity, and potential impact on personal data.
  • Contain and mitigate the breach to prevent further unauthorized access or data exposure.
  • Engage internal security teams and, when necessary, external cybersecurity experts to remediate vulnerabilities.

16.2 Notification to Authorities

In accordance with GDPR and other applicable regulations, Nexly will notify the relevant Data Protection Authority (DPA) without undue delay and, where feasible, within 72 hours of becoming aware of the breach. Notifications include:

  • The nature and scope of the personal data affected.
  • Likely consequences and potential risks for individuals.
  • Measures taken or planned to mitigate the breach and prevent recurrence.

16.3 Notification to Affected Users

Where a breach is likely to result in a high risk to your rights and freedoms, Nexly will promptly notify affected individuals through one or more of the following channels:

  • Email communications to the registered address associated with your account.
  • Prominent notices posted on the Nexly website and relevant mobile apps.
  • Direct communication through phone, SMS, or other secure channels when necessary for urgent risk mitigation.

Notifications include clear guidance on steps you can take to protect yourself, such as changing passwords, monitoring accounts, and reviewing security practices.

16.4 Remediation and Follow-Up

Following a breach, Nexly undertakes a thorough post-incident review, which may include:

  • Strengthening technical defenses and updating security protocols.
  • Implementing additional organizational safeguards, employee training, and awareness programs.
  • Documenting lessons learned and revising incident response plans to enhance future preparedness.

16.5 Transparency and Accountability

Nexly is committed to full transparency in the event of a data breach. We maintain detailed records of all security incidents, actions taken, and communications with authorities and affected individuals. Our breach notification practices uphold trust, comply with legal obligations, and prioritize your safety and privacy.

Summary: Nexly’s data breach notification framework ensures timely detection, mitigation, and communication, maintaining the highest standards of transparency, accountability, and user protection in line with GDPR requirements.

17. Transparency in Algorithmic Processing

Nexly leverages algorithmic and automated processing (including recommendation engines, machine learning models, and personalization algorithms) to deliver enhanced, relevant, and context-aware user experiences. These algorithms analyze data such as browsing behavior, interaction patterns, historical preferences, and inferred interests to optimize content, product suggestions, and feature recommendations.

17.1 Algorithmic Transparency

We are committed to providing clear and understandable information about how our algorithms operate. This includes:

  • The types of data inputs used by algorithms (e.g., interaction history, device signals, demographic information, or aggregated behavioral data).
  • The primary purposes of algorithmic processing, such as content personalization, product recommendations, advertising relevance, and feature optimization.
  • Insights into general factors influencing outputs, including preference weighting, user engagement metrics, and contextual signals.

17.2 User Control and Customization

Nexly empowers users to manage the influence of algorithmic processing on their experience. Controls include:

  • Opting out of personalized recommendations or targeted advertising through account settings or preference panels.
  • Adjusting content preferences to prioritize or exclude certain topics, categories, or types of suggestions.
  • Accessing explanations about why particular content, products, or ads are presented, where feasible, in a user-friendly format.

17.3 Safeguards Against Bias and Unintended Effects

To ensure fairness, accuracy, and accountability, Nexly implements robust governance and technical safeguards, including:

  • Regular audits of algorithmic models for potential bias or discriminatory outcomes.
  • Testing and validation of recommendation and personalization engines to maintain relevance while avoiding harmful profiling.
  • Ongoing updates to models based on anonymized data, feedback mechanisms, and evolving regulatory standards.

17.4 Commitment to Responsible AI

Nexly is dedicated to maintaining a responsible, ethical, and transparent approach to AI and algorithmic processing. Our practices align with emerging international AI guidelines, data protection laws, and best practices, ensuring that automation enhances your experience without compromising your privacy or autonomy.

Summary: Nexly ensures transparency, user control, and ethical governance in all algorithmic processing, providing fair, explainable, and responsible AI-driven experiences while safeguarding privacy and autonomy.

18. Privacy Enhancing Technologies (PETs)

At Nexly, we integrate Privacy Enhancing Technologies (PETs) as a core element of our data protection, AI governance, and responsible innovation strategy. PETs allow us to provide personalized services and actionable insights while minimizing privacy risks, protecting sensitive data, and maintaining user trust. Our approach aligns with GDPR, the EU AI Act, and global standards for privacy-by-design and ethical AI deployment.

18.1 Differential Privacy

Differential privacy protects individual contributions within datasets while preserving the utility of aggregated insights. Key implementations include:

  • Adding statistically calibrated noise to datasets to prevent identification of any single user.
  • Applying differential privacy in AI model training to mitigate re-identification risks on sensitive training data.
  • Balancing privacy guarantees and analytical accuracy to ensure fair, robust, and actionable outputs.

This ensures analytics, reporting, and AI-driven insights remain valuable without exposing individual-level information.
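
The “statistically calibrated noise” described above can be illustrated with the classic Laplace mechanism. The sketch below is a simplified, hypothetical example (the query, epsilon, and sensitivity are illustrative), not Nexly's actual implementation.

```python
import numpy as np

def dp_count(values: list[bool], epsilon: float = 1.0) -> float:
    """Differentially private count: the true count plus Laplace(sensitivity / epsilon) noise.

    A counting query has sensitivity 1, because adding or removing one person
    changes the count by at most 1.
    """
    true_count = sum(values)
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Hypothetical example: how many users enabled a feature, reported with privacy noise.
enabled = [True, False, True, True, False, True]
print(f"noisy count: {dp_count(enabled, epsilon=0.5):.2f}")
```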

18.2 Homomorphic Encryption

Nexly employs fully and partially homomorphic encryption to enable computation on encrypted data. Benefits include:

  • Data remains encrypted throughout its lifecycle (storage, transmission, and processing), reducing exposure risks.
  • Analyses and AI operations are performed without decrypting sensitive data, minimizing attack surfaces.
  • Secure collaboration with third-party partners without sharing raw data, supporting privacy-preserving initiatives.

Homomorphic encryption allows us to extract value from sensitive datasets while maintaining strong confidentiality and regulatory compliance.
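
As an illustration of computing on encrypted data, the sketch below implements a toy version of the Paillier cryptosystem, a partially homomorphic scheme in which multiplying ciphertexts adds the underlying plaintexts. The tiny primes make it insecure by design; it is a conceptual example only, not the scheme or parameters Nexly uses.

```python
import math
import random

# Toy Paillier setup with small, insecure primes (illustration only).
p, q = 293, 433
n = p * q
n_sq = n * n
g = n + 1
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)  # valid because g = n + 1

def encrypt(m: int) -> int:
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n_sq) * pow(r, n, n_sq)) % n_sq

def decrypt(c: int) -> int:
    return (((pow(c, lam, n_sq) - 1) // n) * mu) % n

# Homomorphic property: multiplying ciphertexts adds the plaintexts without decryption.
c1, c2 = encrypt(17), encrypt(25)
assert decrypt((c1 * c2) % n_sq) == 42
```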

18.3 Zero-Knowledge Proofs (ZKPs)

ZKPs verify facts or attributes without revealing underlying personal data. Applications include:

  • Validating user eligibility or credentials (e.g., age verification, location validation) without exposing full personal details.
  • Providing verifiable compliance assurances, such as access permissions or consent status, without unnecessary data disclosure.
  • Enabling privacy-preserving authentication and authorization mechanisms that minimize exposure of sensitive information.

ZKPs empower users to assert claims privately while maintaining system integrity and regulatory compliance.
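
The intuition behind a zero-knowledge proof can be illustrated with a toy Schnorr identification protocol, in which a prover demonstrates knowledge of a secret x behind a public value y = g^x mod p without ever transmitting x. The group parameters below are deliberately small and insecure; this is a conceptual sketch, not a mechanism Nexly deploys.

```python
import secrets

# Toy group parameters: p = 2q + 1 with p, q prime (insecure sizes, illustration only).
p, q = 2039, 1019
g = pow(2, (p - 1) // q, p)  # generator of the order-q subgroup

# Prover's secret and the corresponding public value.
x = secrets.randbelow(q)
y = pow(g, x, p)

# 1. Commitment: the prover picks a random nonce r and sends t = g^r mod p.
r = secrets.randbelow(q)
t = pow(g, r, p)

# 2. Challenge: the verifier replies with a random challenge c.
c = secrets.randbelow(q)

# 3. Response: the prover sends s = r + c*x mod q; the secret x is never revealed.
s = (r + c * x) % q

# 4. Verification: the proof is accepted if g^s == t * y^c mod p.
assert pow(g, s, p) == (t * pow(y, c, p)) % p
```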

18.4 Secure Multi-Party Computation (SMPC)

SMPC allows multiple parties to jointly compute functions over combined datasets without revealing individual inputs. Key use cases include:

  • Cross-organizational collaboration while keeping sensitive datasets confidential.
  • AI model training with distributed datasets, enabling insights from diverse sources without centralizing personal data.
  • Compliance-critical environments, such as finance or healthcare, where strict data partitioning is legally required.

SMPC ensures collaborative computation occurs safely and securely, minimizing exposure risk while maintaining analytical value.
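
One basic building block of SMPC, additive secret sharing, can be sketched as follows: each input is split into random shares that individually reveal nothing, yet the shares can be combined to compute a joint sum. The parties and values below are hypothetical, and this is a conceptual illustration rather than Nexly's protocol stack.

```python
import secrets

MODULUS = 2**61 - 1  # all arithmetic is done modulo a fixed prime

def share(value: int, parties: int) -> list[int]:
    """Split a private value into random additive shares that sum to it modulo MODULUS."""
    shares = [secrets.randbelow(MODULUS) for _ in range(parties - 1)]
    shares.append((value - sum(shares)) % MODULUS)
    return shares

def reconstruct(shares: list[int]) -> int:
    return sum(shares) % MODULUS

# Hypothetical example: two organizations compute a joint total without revealing their inputs.
org_a_revenue, org_b_revenue = 1_200, 3_400
shares_a, shares_b = share(org_a_revenue, 3), share(org_b_revenue, 3)

# Each of the three compute parties adds the shares it holds; no party sees a raw input.
summed_shares = [(a + b) % MODULUS for a, b in zip(shares_a, shares_b)]
assert reconstruct(summed_shares) == org_a_revenue + org_b_revenue
```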

18.5 Commitment to Ethical and Responsible AI

By embedding differential privacy, homomorphic encryption, ZKPs, and SMPC into our practices, Nexly strengthens data protection, mitigates systemic risks, and enhances user trust. These technologies are proactive enablers of ethical, secure, and human-centered AI innovation. We continuously evaluate emerging PETs and integrate them into our data architecture to maintain cutting-edge privacy, security, and transparency standards.

Summary: Nexly uses PETs to uphold user privacy, ensure regulatory compliance, and deliver trustworthy, ethical, and secure AI-powered services.

19. Data Ethics and Responsible AI

At Nexly, we recognize that ethical data use and responsible deployment of Artificial Intelligence (AI) are essential for trust, safeguarding individual rights, and delivering societal value. Data ethics informs the design, development, deployment, and monitoring of AI systems, aligned with internationally recognized standards such as the EU AI Act, GDPR, OECD AI Principles, and ISO/IEC AI governance standards.

Nexly’s data ethics framework is structured around six core principles, each reinforced with operational measures:

19.1 Fairness

We ensure AI systems operate without discrimination and deliver equitable outcomes across all user groups. Key initiatives include:

  • Comprehensive bias detection and mitigation across data collection, feature selection, model training, and deployment.
  • Use of fairness metrics (e.g., disparate impact ratio, equalized odds, predictive parity) to evaluate outcomes across demographic segments.
  • Inclusion of diverse, representative datasets to reflect heterogeneous user populations and real-world scenarios.
  • Regular post-deployment revalidation of models to detect and correct emergent biases from evolving user behavior.

19.2 Transparency

Transparency underpins accountability, comprehension, and trust. We commit to:

  • Accessible explanations of AI processes, including data sources, model logic, and key decision factors.
  • Publishing AI impact assessments detailing scope, purpose, limitations, and potential risks.
  • Plain-language disclosures alongside technical documentation to ensure comprehension by all users.
  • Maintaining audit trails to support regulatory review, internal governance, and public accountability.

19.3 Privacy by Design

Privacy is embedded at every stage of AI and product development. Practices include:

  • Data minimization: Collecting and retaining only data strictly necessary for defined purposes.
  • Integration of Privacy Enhancing Technologies (PETs), including anonymization, pseudonymization, differential privacy, and secure computation.
  • Systematic Privacy Impact Assessments (PIAs) and risk modeling to anticipate privacy implications before deployment.
  • Continuous monitoring and updates to AI systems to mitigate privacy risks arising from model drift or new data sources.

19.4 Accountability and Governance

Responsibility for ethical AI is embedded across the organization. Measures include:

  • Clearly defined organizational roles, including DPOs, AI Ethics Officers, and cross-functional governance committees.
  • Mandatory internal audits and third-party reviews to validate compliance with legal, ethical, and technical standards.
  • Comprehensive employee training on ethical AI, data protection, and responsible innovation.
  • Documented escalation pathways for AI-related incidents to enable rapid response and mitigation.

19.5 User Control and Autonomy

Users are empowered to shape their AI-driven experience through granular control mechanisms:

  • Customizable privacy and personalization settings to adjust algorithmic influence on content or recommendations.
  • Opt-in, opt-out, or modification options for personalization features, ensuring meaningful choices on data usage.
  • Transparent explanations of trade-offs between relevance, personalization, and privacy for informed decision-making.

19.6 Ethical Advertising

Behavioral advertising is designed to respect user dignity and autonomy:

  • Complete transparency about why and how ads are targeted, including data sources and rationale.
  • Easy opt-out mechanisms for personalized advertising without affecting core platform functionality.
  • Internal guidelines preventing exploitative targeting, particularly for vulnerable populations.
  • Continuous monitoring of algorithms and ad delivery to detect and remediate unintended bias or discriminatory outcomes.

Summary: By embedding fairness, transparency, privacy by design, accountability, user autonomy, and ethical advertising into all AI and data practices, Nexly establishes a benchmark for responsible digital governance. These principles are continuously refined through stakeholder engagement, regulatory alignment, and iterative monitoring to ensure AI systems remain trustworthy, ethical, and socially beneficial.

20. Ethical AI Principles

Nexly is committed to the responsible design, deployment, and oversight of Artificial Intelligence (AI) in line with internationally recognized ethical frameworks. Our approach aligns with the EU AI Act and global guidance, including the OECD AI Principles and the Montreal Declaration for Responsible AI. Ethical AI at Nexly is not merely a compliance exercise; it is a core governance principle embedded throughout the AI lifecycle.

Our ethical AI framework is structured around five foundational principles, operationalized through concrete measures and governance practices:

20.1 Fairness

  • AI systems are designed to avoid discrimination or bias against individuals or groups based on protected characteristics such as age, gender, race, ethnicity, religion, disability, or socio-economic status.
  • Bias audits and mitigation are conducted at multiple stages: data collection, model development, deployment, and post-deployment monitoring.
  • Diverse datasets are actively curated to ensure AI reflects broad, representative user populations.

20.2 Transparency

  • Clear communication about AI system operations, including the data used, decision-making logic, and influencing factors.
  • Accessible documentation, algorithmic impact assessments, and model explanations are published for regulators, stakeholders, and end users.
  • Plain-language summaries are provided to ensure transparency extends beyond technical audiences to all users.

20.3 Accountability

  • Defined roles and responsibilities for ethical AI governance, including AI Ethics Officers, Data Protection Officers, and cross-functional oversight committees.
  • Regular internal and independent audits to ensure compliance with legal, ethical, and technical standards.
  • Incident management processes and escalation pathways for AI-related risks, enabling rapid identification and remediation of unintended harms.

20.4 Privacy by Design

  • Privacy and data protection principles are integrated into all stages of AI development and deployment.
  • Data minimization and pseudonymization techniques limit personal data exposure while maintaining analytical utility.
  • Systematic Privacy Impact Assessments (PIAs) and ongoing risk evaluations anticipate and mitigate potential harms before AI systems go live.

20.5 User Control and Autonomy

  • Users have granular control over AI-driven experiences, including personalization, recommendation systems, and targeted content.
  • Opt-in, opt-out, and customization options ensure users can meaningfully shape their interactions with AI systems.
  • Users are informed about trade-offs between personalization, utility, and privacy to enable conscious decision-making.

Summary: By embedding these principles into our AI practices, Nexly seeks to balance innovation with responsibility, fostering trust, safeguarding fundamental rights, and ensuring that AI systems are ethical, transparent, and accountable. Ethical AI is a dynamic commitment at Nexly, continuously refined through stakeholder feedback, regulatory alignment, and advances in AI governance best practices.

21. Human Oversight

At Nexly, human oversight is a cornerstone of our AI governance strategy. All AI systems operate under meaningful, continuous, and accountable human supervision. This oversight ensures compliance with regulations such as GDPR and the EU AI Act, while safeguarding fairness, transparency, user rights, and ethical standards throughout the AI lifecycle. Oversight is conducted by a multidisciplinary team including AI ethicists, data protection officers, engineers, legal experts, and product managers.

Our human oversight framework is designed for proactive governance, with core responsibilities including:

  • Pre-Deployment Review and Ethical Approval: Each AI system undergoes a rigorous multi-tier review before deployment:
    • Assessment of training datasets for bias, representativeness, and legality.
    • Evaluation of models for fairness, robustness, and reliability.
    • Verification of privacy-preserving measures, including anonymization or differential privacy.
    • Formal approval only after compliance with internal ethical standards, privacy regulations, and technical safety requirements is demonstrated.
  • Continuous Monitoring and Performance Oversight: Deployed AI systems are actively monitored:
    • Real-time monitoring of outputs for accuracy, fairness, and consistency.
    • Detection of model drift, anomalies, or unintended consequences.
    • Regular audits to ensure alignment with ethical, legal, and operational standards.
  • Incident Response and Corrective Interventions: Human overseers address risks and anomalies:
    • Investigate errors, bias incidents, or system malfunctions.
    • Recalibrate or retrain models to resolve issues.
    • Temporarily suspend or withdraw AI systems if necessary.
  • Lifecycle Collaboration and Embedded Oversight: Oversight is integrated from design to decommissioning:
    • Collaboration with developers and data scientists during dataset curation and model training.
    • Ethical guidance on functionality, user experience, and risk mitigation at all stages.
    • Ongoing review of updates, features, or operational changes to maintain ethical compliance.
  • Documentation, Traceability, and Transparency: Full documentation ensures accountability:
    • Records of pre-deployment reviews, monitoring, and interventions.
    • Audit trails for internal governance and regulatory inquiries.
    • Clear reporting mechanisms for escalations, ethical concerns, and user complaints.
  • User Engagement and Oversight Feedback: User feedback informs oversight:
    • Channels for reporting unexpected AI behaviors or potential harms.
    • User-reported issues integrated into continuous improvement cycles.
    • Oversight decisions communicated clearly to maintain trust and accountability.

By combining multidisciplinary expertise, rigorous monitoring, proactive intervention, and transparent documentation, Nexly ensures human oversight is operational and meaningful. This approach guarantees ethical alignment, legal compliance, and accountability for AI systems.

21.1 Human-in-the-Loop (HITL)

For high-risk decisions, Nexly uses Human-in-the-Loop (HITL) systems to ensure humans actively validate, override, or reject AI outputs. This is critical when automated decisions could significantly affect rights, freedoms, or societal interests.

  • Personalized Recommendations: Human reviewers ensure AI-driven suggestions are accurate, fair, and unbiased.
  • Content Curation and Customization: Editors review algorithmically curated content to prevent harmful or misleading amplification.
  • Automated Moderation and Enforcement: Human review of moderation decisions protects freedom of expression and ensures context-sensitive outcomes.
  • High-Stakes Decision-Making: Humans retain final authority in critical domains like credit scoring, recruitment, healthcare, and legal analysis.

21.2 Human-on-the-Loop (HOTL)

For lower-risk or large-scale AI systems, Nexly employs Human-on-the-Loop (HOTL) oversight. Humans supervise system operations, monitor trends, review flagged anomalies, and can intervene swiftly if risks are detected, ensuring scalability with accountability.

21.3 Human-in-Command (HIC)

Nexly follows the Human-in-Command (HIC) principle: humans always retain ultimate responsibility for AI governance. AI augments human decision-making but never replaces human judgment, values, or accountability.

By embedding HITL, HOTL, and HIC frameworks, Nexly ensures oversight is robust, adaptable, and context-sensitive. This layered approach reinforces responsible AI, maintaining human direction and safeguarding human values.

22. Risk Assessment and Mitigation

Nexly conducts rigorous and comprehensive risk assessments for all AI systems, with heightened scrutiny for systems categorized as high-risk under the EU AI Act. These assessments proactively identify, evaluate, and mitigate potential risks impacting individuals, communities, or fundamental rights. Risk management is embedded throughout the AI lifecycle, from conceptual design and development to deployment, continuous monitoring, and decommissioning, to ensure responsible, ethical, and safe AI use.

Our risk assessment framework addresses both technical and societal dimensions, encompassing multiple axes of risk including fairness, privacy, security, and operational integrity. Key categories of risk and mitigation approaches include:

  • Bias and Discrimination: Assessing whether AI systems might produce biased or discriminatory outcomes:
    • Use of fairness metrics (e.g., disparate impact, equal opportunity, demographic parity) to quantify bias across groups; a minimal illustration of one such metric follows this list.
    • Adversarial and scenario-based testing to identify potential discriminatory outcomes under diverse conditions.
    • Incorporation of diverse, representative training datasets and regular model retraining to reduce systemic bias over time.
  • Privacy Risks: Evaluating AI systems for risks to personal data or sensitive inferences:
    • Integration of privacy-by-design principles in system architecture.
    • Technical safeguards such as anonymization, pseudonymization, and differential privacy.
    • Secure data handling practices, encryption in transit and at rest, and consent management aligned with GDPR and other applicable laws.
  • Unintended Consequences: Analyzing direct and indirect effects of AI deployment:
    • Potential reinforcement of stereotypes or misinformation through algorithmic outputs.
    • Operational risks including unsafe or unreliable recommendations affecting user safety or decision-making.
    • Ethical risks, such as reduced human oversight, loss of agency, or opaque automated decisions impacting users.
  • Security Vulnerabilities: Evaluating potential threats to system integrity, confidentiality, and availability:
    • Testing for adversarial attacks, model inversion, and data poisoning.
    • Penetration testing and resilience assessments of AI pipelines.
    • Deployment of proactive defense mechanisms, anomaly detection, and incident response protocols.
  • Regulatory and Compliance Risks: Continuous monitoring for alignment with evolving laws, standards, and best practices, ensuring system design and processes maintain full compliance.
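
For illustration only, the sketch below shows how a disparate impact ratio, one of the fairness metrics named above, can be computed: the selection rate of a protected group is divided by that of a reference group, and values below the commonly cited 0.8 screening threshold are flagged. The function names and sample outcomes are hypothetical and do not reflect Nexly's production tooling.

```python
# Illustrative only: a minimal disparate impact check, not Nexly's production tooling.
from typing import Sequence


def selection_rate(outcomes: Sequence[int]) -> float:
    """Share of favorable outcomes (1 = favorable, 0 = unfavorable)."""
    return sum(outcomes) / len(outcomes)


def disparate_impact(protected: Sequence[int], reference: Sequence[int]) -> float:
    """Ratio of the protected group's selection rate to the reference group's."""
    return selection_rate(protected) / selection_rate(reference)


if __name__ == "__main__":
    protected_group = [1, 0, 0, 1, 0, 0, 0, 1]   # hypothetical outcomes
    reference_group = [1, 1, 0, 1, 1, 0, 1, 1]   # hypothetical outcomes
    ratio = disparate_impact(protected_group, reference_group)
    # The "80% rule" is a common screening threshold, not a legal determination.
    print(f"disparate impact ratio = {ratio:.2f}",
          "(below 0.8 threshold)" if ratio < 0.8 else "")
```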

Risk mitigation is achieved through a combination of technical, organizational, and human-centric measures:

  • Technical Safeguards: Model validation, bias correction, security hardening, and privacy-enhancing technologies.
  • Organizational Controls: Defined policies, governance committees, and oversight bodies responsible for AI ethics, compliance, and operational integrity.
  • Human Oversight Mechanisms: Continuous monitoring, review boards, and escalation procedures to address anomalies, ethical concerns, and user-reported issues.

All identified risks, mitigation actions, and monitoring outcomes are thoroughly documented and reviewed regularly. This ensures adaptive risk management that evolves with technology, emerging threats, and regulatory changes. By embedding comprehensive risk assessment and mitigation processes, Nexly maintains AI systems that are safe, ethical, compliant, and trustworthy for users and stakeholders alike.

23. Data Quality and Bias Mitigation

High-quality, representative, and unbiased data is essential for developing AI systems that are ethical, reliable, and non-discriminatory. At Nexly, we implement a comprehensive and systematic framework for data governance, bias detection, and fairness assurance, aligning with GDPR principles, the EU AI Act, and global best practices in responsible AI. Our objective is to minimize harm, enhance inclusivity, and maintain trust in AI-driven outcomes.

  • Data Source Evaluation and Validation: Every dataset used in AI development is rigorously vetted for accuracy, completeness, provenance, timeliness, and representativeness. This includes:
    • Identifying and mitigating gaps or underrepresentation in the data.
    • Assessing potential sources of historical bias or skewed distributions.
    • Supplementing datasets with additional sources or synthetic data to ensure balanced representation across demographic and contextual variables.
  • Bias Detection and Quantification: Advanced analytical and statistical techniques are applied to detect and quantify potential biases:
    • Statistical parity, disparate impact, and equality of opportunity analysis across protected groups.
    • Counterfactual testing to examine whether changes in sensitive attributes alter model predictions; a brief sketch of this technique follows this list.
    • Continuous use of fairness metrics throughout the AI lifecycle to monitor equity.
  • Bias Mitigation and Fairness Engineering: Nexly integrates fairness and equity directly into model design and training using techniques such as:
    • Algorithmic debiasing methods, including reweighting, resampling, and adversarial fairness approaches.
    • Fair representation learning to produce equitable outputs without compromising predictive performance.
    • Privacy-preserving methods (e.g., differential privacy) to protect individual data while maintaining fairness in aggregate outcomes.
  • Continuous Monitoring and Auditing: Post-deployment, AI systems undergo ongoing surveillance and structured audits to ensure high data quality and fairness:
    • Monitoring model performance for bias drift over time.
    • Periodic fairness audits and ethical reviews by multidisciplinary teams.
    • Transparent documentation of findings and prompt corrective actions when bias or unfair outcomes are detected.
  • Human Oversight and Accountability: Nexly ensures human accountability, particularly in high-stakes AI applications:
    • Review of automated decisions by experts in ethics, law, and domain-specific knowledge.
    • Investigation and resolution of anomalies, errors, or discriminatory behavior.
    • Maintaining audit trails for internal governance and external regulatory review.
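
As a purely illustrative sketch of the counterfactual testing mentioned above, the example below takes an arbitrary scoring function, swaps the value of a sensitive attribute in each record, and reports how often the prediction changes; the model, field names, and data are hypothetical and do not represent Nexly's actual pipeline.

```python
# Hypothetical sketch of counterfactual testing; the model and data are placeholders.
from typing import Callable, Dict, List


def counterfactual_flip_rate(
    predict: Callable[[Dict], int],
    records: List[Dict],
    sensitive_key: str,
    counterfactual_value: str,
) -> float:
    """Fraction of records whose prediction changes when the sensitive attribute is swapped."""
    changed = 0
    for record in records:
        original = predict(record)
        altered = {**record, sensitive_key: counterfactual_value}
        if predict(altered) != original:
            changed += 1
    return changed / len(records)


if __name__ == "__main__":
    def toy_model(r: Dict) -> int:
        # Toy scorer that (improperly) keys on the sensitive attribute.
        if r["group"] == "A" and r["score"] > 50:
            return 1
        return int(r["score"] > 70)

    data = [{"group": "A", "score": 60}, {"group": "A", "score": 80}, {"group": "B", "score": 60}]
    rate = counterfactual_flip_rate(toy_model, data, "group", "B")
    print(f"prediction flip rate = {rate:.2f}")  # > 0 indicates sensitivity to the attribute
```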

By embedding rigorous data governance, proactive bias mitigation, continuous monitoring, and strong human oversight, Nexly ensures that its AI systems are trustworthy, transparent, and equitable. These practices minimize risks to individuals and society while fostering confidence in the ethical use of AI across our platform.

24. Explainability and Transparency

At Nexly, we recognize that explainability and transparency are essential for building trust, fostering accountability, and empowering users to make informed decisions regarding AI-driven interactions. We are committed to providing clear, accessible, and meaningful explanations of AI system behavior, outputs, and the factors influencing automated recommendations or decisions. Our approach ensures users understand not only the “what” but also the “why” behind AI-generated outcomes.

  • Transparent Communication of AI Decisions: We provide users with comprehensible explanations of how AI systems operate, including:
    • The specific data inputs and variables that influence predictions or recommendations.
    • The role of algorithmic models, rules, and heuristics in generating outcomes.
    • The level of confidence or uncertainty associated with predictions or automated decisions.
  • User-Centric Explainability: Explanations are designed to be accessible to a broad audience, avoiding technical jargon. Features include:
    • Visual aids and dashboards illustrating key decision factors and model reasoning.
    • Interactive tools allowing users to explore “what-if” scenarios and understand the impact of changing inputs.
    • Layered explanations, providing high-level summaries for general users and detailed technical insights for those seeking deeper understanding.
  • Algorithmic Accountability: Nexly documents and audits AI systems to ensure transparency is embedded internally as well as externally:
    • Maintaining detailed model documentation, including training data characteristics, assumptions, and limitations.
    • Conducting periodic audits for fairness, reliability, and adherence to ethical principles.
    • Providing traceable decision logs that allow internal and external reviewers to reconstruct reasoning behind AI outputs.
  • Empowering User Control: We enable users to exercise meaningful control over algorithmic interactions:
    • Opting in or out of personalized recommendations, content curation, and AI-driven suggestions.
    • Accessing detailed explanations for why specific recommendations or automated actions were presented.
    • Providing feedback to improve AI system accuracy, relevance, and fairness over time.
  • Ongoing Transparency Improvements: Nexly continually refines explainability frameworks in response to emerging standards, user feedback, and regulatory guidance, ensuring our transparency measures remain comprehensive, clear, and actionable.

By embedding robust explainability and transparency measures into our AI systems, Nexly ensures that users are informed, empowered, and confident in the fairness, reliability, and ethical grounding of all AI-driven interactions.

25. User Consent and Control

Nexly is committed to ensuring that users maintain full agency and oversight over how their data is collected, processed, and used in AI-driven systems. Obtaining explicit, informed, and freely given consent is a foundational principle, particularly for AI systems that may significantly affect user rights, preferences, or opportunities. Beyond consent, we empower users with granular controls that allow them to actively manage their interactions, privacy settings, and algorithmic experiences.

  • Explicit and Informed Consent: Users are provided with clear, accessible information about the types of AI processing performed, the data involved, and the potential impacts of automated decisions. Key features include:
    • Clear explanations of AI system purposes, intended outcomes, and associated risks.
    • Step-by-step consent flows for high-impact AI features, ensuring users can make deliberate and informed choices.
    • Options to modify or withdraw consent at any time without losing access to essential services.
  • Granular Control Over Data and AI-Driven Experiences: Nexly provides users with robust mechanisms to personalize their AI interactions and manage data usage, including:
    • Account-level privacy dashboards that allow users to enable or disable specific AI features, such as personalized recommendations, predictive insights, or content ranking.
    • Control over which data sources (browsing history, purchase history, or third-party integrations) are used for algorithmic personalization.
    • Real-time feedback mechanisms enabling users to refine AI outputs and improve system accuracy, relevance, and fairness.
  • Dynamic Consent and Preference Management: Consent is not a one-time event. Nexly ensures that user preferences are continuously respected and updated:
    • Periodic prompts for consent renewal, particularly when AI systems are updated, new features are introduced, or additional data processing is proposed.
    • Transparent notifications for changes in AI functionality, data use policies, or automated decision-making processes.
    • Audit logs that allow users to review and manage historical consent decisions and changes to personalization settings.
  • Respecting User Autonomy and Choice: Users have the right to opt out of specific AI-driven processing or automated decision-making without facing discrimination or degraded service quality. This includes:
    • Non-intrusive opt-out mechanisms for profiling, algorithmic recommendations, or behavioral advertising.
    • Ability to request explanations for algorithmic decisions, including how their consent affects outcomes.
    • Empowering users to make informed trade-offs between personalization, functionality, and privacy, ensuring meaningful control over their digital experiences.
  • Regulatory Compliance and Best Practices: All consent and control mechanisms align with GDPR, the EU AI Act, and other applicable data protection laws. Nexly actively monitors evolving regulatory guidance to ensure user autonomy, transparency, and accountability remain at the core of AI governance.

By integrating explicit consent, granular control, dynamic preference management, and robust user autonomy mechanisms, Nexly ensures that users retain meaningful oversight of their data and AI-driven experiences, fostering trust, transparency, and ethical innovation.

26. Data Protection and Privacy

Nexly is committed to safeguarding personal data and maintaining the highest standards of privacy protection across all operations. Compliance with the General Data Protection Regulation (GDPR), as well as other applicable global privacy frameworks, forms the foundation of our data governance approach. Our goal is to ensure that individuals’ rights are fully respected, data is handled securely, and privacy is embedded throughout the lifecycle of all services and AI systems.

  • Advanced Data Anonymization and Pseudonymization: Nexly employs state-of-the-art anonymization and pseudonymization techniques to ensure that personal identifiers are removed or masked in datasets used for analytics, AI training, and research. Key practices include:
    • Aggregation of data to prevent re-identification of individual users.
    • Application of differential privacy when analyzing behavioral or transactional data.
    • Strict separation of identifiable information from operational and research datasets to maintain confidentiality.
  • Robust Encryption Protocols: Personal data is secured with industry-leading encryption both in transit (TLS/SSL) and at rest (AES-256 or equivalent); an illustrative at-rest encryption sketch follows this list. Additional safeguards include:
    • Encrypted key management with limited access to authorized personnel.
    • Use of end-to-end encryption for sensitive communications and transactions.
    • Regular review and updating of cryptographic standards to counter emerging threats.
  • User-Centric Privacy Controls: Nexly empowers users with granular control over how their personal data is collected, processed, and shared. Features include:
    • Comprehensive privacy dashboards where users can review, update, or delete personal data.
    • Opt-in and opt-out mechanisms for data processing categories, including AI personalization, marketing, and analytics.
    • Real-time notifications when privacy settings are updated or when data is used for new purposes.
  • Data Minimization and Purpose Limitation: We rigorously apply the principles of data minimization, ensuring that only the minimum necessary data is collected and processed for clearly defined purposes. Measures include:
    • Regular audits to identify and eliminate unnecessary data collection.
    • Retention policies aligned with GDPR storage limitation requirements.
    • Segregation of data by purpose, ensuring that data collected for one function is not repurposed without explicit consent or legal basis.
  • Privacy Impact Assessments (PIAs) and Risk Monitoring: Nexly conducts comprehensive Privacy Impact Assessments for all high-risk or innovative processing activities. Key aspects include:
    • Identification and evaluation of potential privacy risks across technical, operational, and organizational domains.
    • Implementation of mitigation strategies, such as PETs, access controls, and anonymization, to reduce risks before deployment.
    • Ongoing monitoring and periodic reassessment to adapt to emerging threats, regulatory changes, and new AI capabilities.
  • Accountability and Compliance: Nexly maintains robust governance structures to ensure continuous compliance with data protection standards:
    • Appointment of a dedicated Data Protection Officer (DPO) responsible for overseeing GDPR and global privacy compliance.
    • Internal audits, risk assessments, and reporting mechanisms to track adherence to privacy policies and regulatory obligations.
    • Transparent communication channels enabling users to exercise their rights, lodge complaints, and seek clarifications regarding data protection practices.
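
The sketch below illustrates at-rest encryption with AES-256-GCM using the third-party Python `cryptography` package. It is a simplified example rather than a description of Nexly's actual key management; in practice the key would be held in a hardware security module or managed key service, not in application memory.

```python
# Simplified illustration of AES-256-GCM encryption at rest (not Nexly's actual key management).
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # in practice, held in an HSM or key service
aesgcm = AESGCM(key)

plaintext = b"user@example.com"             # hypothetical personal data field
nonce = os.urandom(12)                      # unique per encryption operation
ciphertext = aesgcm.encrypt(nonce, plaintext, b"profile-record")

# Store nonce + ciphertext; decryption requires the same key, nonce, and associated data.
recovered = aesgcm.decrypt(nonce, ciphertext, b"profile-record")
assert recovered == plaintext
```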

By integrating advanced technical safeguards, comprehensive governance processes, and user-centric controls, Nexly ensures that personal data is protected with the highest standards of privacy, security, and ethical responsibility. This approach reinforces trust, accountability, and resilience in an increasingly complex digital ecosystem.

27. Accountability and Auditing

Nexly is committed to maintaining the highest standards of accountability to ensure full compliance with the EU AI Act, GDPR, and other applicable international data protection and AI regulations. Accountability is embedded throughout the AI lifecycle, from design and development to deployment, monitoring, and decommissioning. By systematically documenting AI systems, data usage, and decision-making processes, we ensure complete transparency, traceability, and verifiability of all AI-driven operations.

  • Comprehensive Record-Keeping and Documentation: Nexly maintains detailed and auditable records for each AI system, including:
    • Descriptions of system objectives, intended use cases, and functional specifications.
    • Data sources, preprocessing methods, and labeling procedures, including provenance and quality assessments.
    • Training methodologies, hyperparameters, model architectures, and evaluation metrics.
    • Decision logs, model outputs, and any automated actions taken to enable post-hoc review and verification.
    This thorough documentation ensures full traceability and supports transparency for regulators, internal governance, and end users.
  • Periodic Internal and External Audits: Nexly conducts structured audits to verify adherence to ethical, legal, and technical standards. These audits encompass:
    • Internal reviews led by cross-functional teams in AI ethics, legal, data protection, and engineering.
    • Independent external audits performed by accredited third parties to validate compliance and reinforce trust.
    • Assessment of privacy safeguards, security controls, bias mitigation measures, and overall AI system reliability.
    • Documentation of audit findings with actionable recommendations to continuously enhance compliance and system integrity.
  • Transparent Reporting and Incident Management: Nexly maintains clear, structured reporting procedures for all stakeholders, including:
    • Timely notification to regulatory authorities in line with GDPR and other applicable laws in the event of data breaches, non-compliance, or AI system anomalies.
    • Internal escalation protocols to ensure rapid investigation and resolution of compliance concerns.
    • Public disclosures or user notifications where transparency is required, supporting accountability and trust.
  • Governance Structures and Oversight: Accountability is reinforced through robust governance frameworks, which include:
    • Dedicated AI ethics and compliance committees responsible for reviewing system design, deployment, and monitoring outcomes.
    • Defined roles and responsibilities across product, engineering, legal, and privacy teams to ensure holistic oversight.
    • Integration of human-in-the-loop oversight for high-stakes AI decisions and automated processes affecting users’ rights.
  • Continuous Improvement and Risk Adaptation: Nexly proactively updates accountability practices in response to evolving legal, ethical, and technical requirements. Continuous improvement is achieved through:
    • Regular evaluation of audit procedures, risk assessments, and governance mechanisms.
    • Incorporation of emerging best practices, standards, and frameworks in AI ethics and privacy.
    • Iterative enhancements to record-keeping, monitoring, and transparency measures based on audit outcomes and stakeholder feedback.

By embedding meticulous documentation, rigorous audits, transparent reporting, and robust governance into our AI and data operations, Nexly ensures that accountability is demonstrable, ethical standards are upheld, and compliance with global regulatory frameworks is consistently maintained. These measures reinforce trust, protect individual rights, and support responsible AI innovation at scale.

28. Prohibited Practices

Nexly is firmly committed to the responsible development and deployment of Artificial Intelligence (AI) and strictly adheres to the prohibitions outlined in the EU AI Act. We recognize that certain AI applications carry significant risks to fundamental rights, personal freedoms, and societal well-being. To uphold ethical standards, regulatory compliance, and public trust, the following practices are strictly prohibited across all Nexly operations:

  • Social Scoring: Nexly does not develop or deploy AI systems that assign social scores, behavioral ratings, or reputational rankings to individuals based on their characteristics, activities, or personal data. Social scoring can undermine autonomy, exacerbate social inequalities, restrict access to essential services, and result in discriminatory outcomes. This prohibition encompasses all automated or semi-automated scoring mechanisms that may influence decisions affecting employment, financial access, housing, or civic participation.
  • Indiscriminate or Mass Surveillance: Nexly categorically avoids AI-driven mass surveillance or monitoring practices that collect data from individuals without their explicit, informed consent or a clearly defined legal and societal justification. AI systems are designed with privacy, proportionality, and transparency in mind. Surveillance is strictly limited to narrowly defined purposes, compliant with applicable laws, and subject to robust human oversight to prevent misuse or overreach.
  • Manipulation or Exploitation of Vulnerable Individuals: Nexly prohibits the use of AI to manipulate, exploit, or target vulnerable populations, including but not limited to children, people with disabilities, or individuals in sensitive contexts. Systems are developed to prioritize:
    • Safety and protection from harm, ensuring users cannot be coerced or manipulated.
    • Equity and fairness, avoiding exploitative targeting based on susceptibility or vulnerability.
    • Human dignity, ensuring AI interventions respect the autonomy and rights of all individuals.
  • Other High-Risk or Prohibited Activities: Beyond the above, Nexly refrains from deploying any AI application that is explicitly forbidden by law or that could reasonably be expected to cause systemic harm, including:
    • AI-enabled deception or fraud that could mislead users.
    • Automated decision-making without human oversight in contexts with significant legal or personal consequences.
    • Use of personal or sensitive data in a manner that violates consent, privacy, or ethical standards.

By enforcing these prohibitions, Nexly ensures that all AI systems operate within a framework of ethical responsibility, legal compliance, and respect for human rights. These safeguards foster trust, reinforce accountability, and contribute to the creation of AI technologies that are safe, equitable, and beneficial for society.

29. Notification of Authorities

Nexly is committed to full transparency and regulatory compliance in the management of AI systems. In alignment with the EU AI Act, GDPR, and other applicable legal frameworks, Nexly maintains clear protocols for promptly notifying competent authorities in the event of any significant incidents, malfunctions, or breaches involving AI systems that could impact the rights, safety, or well-being of individuals or society.

Our approach to notification is guided by the following principles:

  • Timeliness: All relevant authorities are notified without undue delay following the detection of incidents, in accordance with statutory timelines and risk assessments. Rapid notification ensures that regulators can take appropriate oversight or mitigation measures.
  • Scope and Severity Assessment: Before notification, incidents are assessed to determine potential impacts on fundamental rights, user safety, and societal well-being. This ensures that authorities receive accurate, context-rich, and actionable information.
  • Comprehensive Reporting: Notifications include detailed technical descriptions of the incident, affected AI systems, root cause analyses, risk assessments, and proposed mitigation strategies. Where applicable, updates on corrective actions and preventive measures are provided to regulators until the issue is resolved.
  • Coordination with Internal Oversight: The notification process is overseen by Nexly’s AI Governance, Data Protection, and Risk Management teams, ensuring alignment with internal policies, audit trails, and ethical standards. Human oversight is maintained throughout the incident response lifecycle.
  • Regulatory Collaboration: Nexly proactively cooperates with authorities, providing additional data, clarifications, or system access as required. This ensures that regulators can fully assess the risks, validate corrective measures, and provide guidance on preventing recurrence.

By implementing robust, timely, and transparent notification protocols, Nexly ensures that all AI-related incidents are managed responsibly, authorities are kept fully informed, and the rights, safety, and trust of users and society are safeguarded.

30. Training and Certification

Nexly is committed to fostering a culture of responsibility, accountability, and expertise in AI development and deployment. All personnel involved in the design, development, deployment, monitoring, and maintenance of AI systems are required to receive comprehensive training and, where applicable, formal certification to ensure adherence to ethical, legal, and technical standards.

Our training and certification framework encompasses the following key elements:

  • Ethical AI and Data Ethics Training: Staff undergo mandatory training on ethical principles, including fairness, transparency, privacy, human rights, and societal impact. Training emphasizes the operationalization of ethics in AI design, bias mitigation, and responsible algorithmic decision-making.
  • Regulatory and Legal Compliance: Personnel are educated on GDPR, the EU AI Act, and other relevant international and local regulations. This ensures that data handling, AI operations, and reporting practices are fully compliant with statutory obligations and regulatory guidance.
  • Technical Best Practices: Training covers secure software development, AI model lifecycle management, data governance, privacy-enhancing technologies (PETs), risk assessment, explainability, and audit procedures. Employees are equipped to design, implement, and monitor AI systems that meet high standards of robustness, reliability, and security.
  • Certification and Continuous Professional Development: Where appropriate, personnel obtain recognized certifications in AI ethics, cybersecurity, data protection, and regulatory compliance. Ongoing professional development ensures that staff stay current with emerging technologies, standards, and global best practices.
  • Role-Specific Training: Training is tailored according to job function and level of AI system interaction. For example, developers receive in-depth technical instruction on bias mitigation and model robustness, while governance and oversight teams focus on auditing, compliance, and incident response procedures.

By implementing structured, comprehensive, and role-specific training and certification programs, Nexly ensures that all personnel possess the knowledge, skills, and accountability necessary to responsibly manage AI systems, mitigate risks, and uphold the highest standards of ethical, legal, and societal compliance.

31. Continuous Monitoring and Improvement

Nexly is committed to maintaining the highest standards of safety, ethics, and compliance throughout the lifecycle of its AI systems. Continuous monitoring and improvement are core components of our AI governance framework, ensuring that systems remain aligned with the EU AI Act, GDPR, and evolving industry best practices.

Our continuous monitoring and improvement strategy encompasses the following elements:

  • Real-Time System Monitoring: AI systems are continuously monitored for performance, reliability, fairness, and compliance. Metrics such as accuracy, error rates, bias indicators, and anomalous behaviors are tracked in real time to detect deviations from expected behavior; a simplified drift-check sketch follows this list.
  • Ongoing Risk Assessment: Risk evaluations are performed on a recurring basis to identify new or emerging threats, including technical vulnerabilities, algorithmic drift, or potential ethical concerns. Risk mitigation strategies are updated proactively to address these evolving risks.
  • Automated and Human Oversight: Monitoring combines automated alerting systems with human oversight. Human reviewers validate critical decisions, investigate anomalies, and ensure that outputs remain consistent with ethical, legal, and societal standards.
  • Continuous Model Evaluation and Updating: AI models are regularly evaluated against fairness, robustness, and explainability benchmarks. When necessary, models are retrained, recalibrated, or redesigned to maintain high-quality performance and reduce bias or unintended consequences.
  • Feedback Integration: Insights from user feedback, incident reports, and stakeholder reviews are systematically incorporated into system improvements. This ensures that AI systems adapt to real-world usage and evolving user expectations while mitigating potential harms.
  • Regulatory Alignment and Best Practice Integration: Continuous monitoring ensures ongoing alignment with legal obligations and emerging standards in AI governance. Our teams actively track regulatory updates, academic research, and industry guidance to implement best practices and enhance compliance.
  • Transparent Reporting and Documentation: All monitoring activities, assessments, and improvements are documented comprehensively. Transparent reporting enables accountability, facilitates audits, and provides a clear trail of governance actions for internal and external stakeholders.
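
One common way to operationalize this kind of drift monitoring, offered here purely as an illustration, is the population stability index (PSI), which compares the distribution of model scores in production against a baseline window; values above roughly 0.2 are often treated as a prompt to investigate. The thresholds, bins, and data below are hypothetical and do not describe Nexly's monitoring stack.

```python
# Minimal drift-check illustration using the population stability index (PSI).
import math
from typing import List


def histogram(values: List[float], edges: List[float]) -> List[float]:
    """Proportion of values falling into each bin defined by consecutive edges."""
    counts = [0] * (len(edges) - 1)
    for v in values:
        for i in range(len(edges) - 1):
            if edges[i] <= v < edges[i + 1] or (i == len(edges) - 2 and v == edges[-1]):
                counts[i] += 1
                break
    total = len(values)
    return [max(c / total, 1e-6) for c in counts]  # small floor avoids log(0)


def psi(baseline: List[float], live: List[float], edges: List[float]) -> float:
    b, l = histogram(baseline, edges), histogram(live, edges)
    return sum((li - bi) * math.log(li / bi) for bi, li in zip(b, l))


edges = [0.0, 0.25, 0.5, 0.75, 1.0]
baseline_scores = [0.1, 0.2, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]   # hypothetical reference window
live_scores = [0.6, 0.7, 0.7, 0.8, 0.9, 0.9, 0.95, 1.0]      # hypothetical production window
value = psi(baseline_scores, live_scores, edges)
print(f"PSI = {value:.2f}", "-> investigate" if value > 0.2 else "-> stable")
```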

By embedding continuous monitoring, evaluation, and iterative improvement into our AI lifecycle, Nexly ensures that its AI systems remain responsible, trustworthy, and resilient, while upholding ethical standards, regulatory compliance, and user trust.

32. Consent Management Dashboard

Nexly provides users with a comprehensive, intuitive Consent Management Dashboard designed to maximize transparency, control, and user autonomy over personal data and AI-driven experiences. This dashboard empowers users to manage their privacy preferences efficiently and in real time, reflecting our commitment to privacy-by-design and regulatory compliance under GDPR and related frameworks.

Key features and capabilities of the Consent Management Dashboard include:

  • Granular Consent Controls: Users can grant, withdraw, or modify consent for specific categories of data processing, including account management, marketing communications, personalization, behavioral analytics, and AI-driven recommendations. This allows fine-tuned control rather than an all-or-nothing approach; a simplified data-model sketch follows this list.
  • Real-Time Updates: Changes to consent preferences take immediate effect across all Nexly platforms and services, ensuring that user choices are respected without delay.
  • Transparency and Detailed Explanations: The dashboard provides clear, plain-language explanations for each type of data processing, the purposes involved, and the associated risks and benefits, enabling informed decision-making.
  • Audit Trail and History: Users can view a history of their consent decisions and any changes made over time, ensuring transparency and traceability. This feature supports accountability and aligns with GDPR requirements for demonstrable consent.
  • Opt-Out and Restriction Options: Beyond granting or withdrawing consent, users can selectively restrict certain data uses, such as profiling or targeted advertising, while maintaining essential service functionality.
  • Integration with AI and Personalization Settings: Consent choices directly influence AI-driven services, such as recommendations, content customization, and algorithmic personalization, giving users control over the use of their data in automated decision-making.
  • Regulatory Compliance and Continuous Improvement: The dashboard is designed to comply with GDPR, the EU AI Act, and other emerging privacy regulations. Nexly continuously monitors user feedback and regulatory developments to enhance dashboard functionality and user experience.
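
One possible way to represent the granular consent categories and audit trail described above is sketched below. The schema, category names, and methods are hypothetical illustrations, not Nexly's actual dashboard implementation.

```python
# Hypothetical data model for granular consent with an audit trail; not Nexly's actual schema.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Dict, List


@dataclass
class ConsentEvent:
    category: str          # e.g. "marketing", "personalization", "analytics"
    granted: bool
    timestamp: datetime


@dataclass
class ConsentProfile:
    user_id: str
    categories: Dict[str, bool] = field(default_factory=dict)
    history: List[ConsentEvent] = field(default_factory=list)

    def set_consent(self, category: str, granted: bool) -> None:
        """Apply the change immediately and append it to the audit trail."""
        self.categories[category] = granted
        self.history.append(ConsentEvent(category, granted, datetime.now(timezone.utc)))


profile = ConsentProfile(user_id="u-123")
profile.set_consent("personalization", True)
profile.set_consent("marketing", False)
print(profile.categories)     # current effective choices
print(len(profile.history))   # reviewable consent history
```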

By providing a powerful, user-centric Consent Management Dashboard, Nexly ensures that individuals maintain meaningful control over their data and AI interactions, fostering trust, transparency, and compliance in all aspects of digital engagement.

33. Privacy by Design

At Nexly, Privacy by Design (PbD) is a core principle embedded across the entire lifecycle of our products and services. We proactively integrate privacy and data protection measures from the earliest stages of product conception, through design, development, deployment, and maintenance, ensuring that personal data is protected by default and by architecture, not as an afterthought.

Our Privacy by Design approach encompasses the following key dimensions:

  • Proactive Risk Assessment: Privacy and security risks are identified and mitigated before any product or feature is launched. This includes conducting Data Protection Impact Assessments (DPIAs) for high-risk data processing activities to anticipate potential harms and ensure regulatory compliance.
  • Data Minimization and Purpose Limitation: We collect and process only the personal data strictly necessary for the intended functionality. Systems are designed to avoid unnecessary data retention and to limit access based on user roles and operational needs.
  • Built-in Security and Encryption: Security controls are integrated at the design phase, including strong encryption for data at rest and in transit, secure authentication mechanisms, and rigorous access management. These safeguards ensure confidentiality, integrity, and resilience against cyber threats.
  • Default Privacy Settings: All products and services are configured to prioritize privacy by default. Users retain control over their data through opt-in settings, granular consent mechanisms, and configurable personalization options.
  • Transparency and User Awareness: Privacy considerations are communicated clearly to users through accessible notices, dashboards, and educational prompts. Users are empowered to make informed choices regarding how their data is collected, processed, and shared.
  • Ongoing Monitoring and Iterative Improvement: Privacy protections are continuously assessed and updated as products evolve. This includes monitoring emerging threats, regulatory changes, and user feedback to enhance privacy controls and maintain compliance with GDPR, the EU AI Act, and other applicable frameworks.
  • Cross-Functional Collaboration: Privacy by Design is reinforced through collaboration among product designers, engineers, legal teams, and data protection officers. Ethical, legal, and technical perspectives are integrated into every stage of development to ensure holistic privacy governance.

By embedding Privacy by Design at every layer of product development, Nexly ensures that data protection is not only a compliance requirement but also a fundamental feature of our services, protecting user privacy, fostering trust, and setting a high standard for responsible digital innovation.

34. User Education and Awareness

At Nexly, we recognize that user empowerment is a cornerstone of effective data protection. Beyond compliance, we are committed to fostering a culture of privacy literacy and digital awareness, equipping users with the knowledge and tools to make informed choices about their personal data and online interactions.

Our user education and awareness program includes the following pillars:

  • Comprehensive Privacy Resources: We maintain an extensive, easily accessible repository of educational materials, including guides, tutorials, and FAQs, that explain core data protection principles, user rights under GDPR and the EU AI Act, and practical strategies for safeguarding personal information.
  • Interactive Tools and Tutorials: Users can access interactive modules, walkthroughs, and privacy wizards that guide them step-by-step through managing account settings, configuring consent preferences, and understanding the implications of AI-driven personalization and data sharing.
  • Data Protection Tips and Best Practices: We provide actionable recommendations for secure online behavior, such as creating strong passwords, enabling multi-factor authentication, recognizing phishing attempts, and understanding the impact of cookies, trackers, and third-party integrations.
  • Awareness of AI and Automated Processing: Users are informed about algorithmic decision-making, profiling, and personalization features. We provide clear explanations of how AI systems operate, what data is used, and how users can control or opt out of automated processing.
  • Targeted Communications and Updates: We proactively notify users of important privacy updates, new features, or changes to data processing practices through newsletters, in-app notifications, and other channels, fostering continuous awareness and engagement.
  • Community Engagement and Feedback: Users are encouraged to participate in webinars, workshops, and surveys on privacy and security topics. Feedback mechanisms enable Nexly to refine educational content and address emerging concerns or knowledge gaps effectively.

By combining accessible resources, interactive learning, clear communication, and community engagement, Nexly ensures that users are not only informed but actively empowered to take control of their personal data and make safe, confident decisions online. This approach strengthens trust, promotes responsible digital behavior, and aligns with Nexly’s overarching commitment to ethical and transparent data practices.

35. Ethical Advertising Practices

Nexly is committed to ensuring that all advertising and promotional activities conducted on our platform adhere to the highest ethical standards. When behavioral or personalized advertising is employed, we prioritize transparency, user autonomy, fairness, and the protection of personal data. Our approach ensures that advertising enhances the user experience without compromising trust or privacy.

Key components of Nexly’s ethical advertising framework include:

  • User Control and Consent: Users are provided with granular control over their advertising preferences through account settings and the Consent Management Dashboard. Options include opting in or out of personalized ads, managing cookie and tracking preferences, and specifying categories of interest for ad personalization.
  • Transparency in Ad Delivery: We clearly explain how ads are selected and delivered, including the types of data used, the logic behind targeting decisions, and the role of third-party advertising partners. Users can access this information in plain language, promoting informed decision-making.
  • Non-Discrimination and Fair Targeting: Our advertising practices avoid targeting based on sensitive personal characteristics such as race, ethnicity, religion, sexual orientation, health status, or political beliefs. We apply fairness checks and ethical oversight to prevent discriminatory or exploitative ad delivery.
  • Data Minimization and Privacy Protection: Only the minimum necessary data is used for advertising purposes. Data is pseudonymized, aggregated, or anonymized wherever possible, and all processing complies with GDPR and applicable privacy regulations.
  • Third-Party Oversight: All advertising partners and platforms are rigorously vetted and contractually obligated to comply with Nexly’s ethical standards, data protection requirements, and legal obligations. Regular audits and assessments ensure ongoing compliance.
  • Clear Opt-Out Mechanisms: Users may easily opt out of personalized or interest-based advertising at any time via account settings, cookie controls, or external opt-out tools such as the AdChoices framework. Opting out does not affect access to core platform functionalities.
  • Continuous Monitoring and Accountability: Nexly regularly reviews advertising practices to ensure they remain ethical, privacy-respecting, and aligned with evolving regulations. Feedback mechanisms allow users to report concerns, which are addressed promptly by dedicated oversight teams.

Through these measures, Nexly ensures that advertising serves as a responsible, transparent, and user-centric component of the platform experience, reinforcing trust while respecting user rights, preferences, and privacy.

36. Use of Personal Data for AI Model Training

At Nexly, we leverage personal data to train and optimize artificial intelligence (AI) models with the goal of enhancing the responsiveness, accuracy, and personalization of our platform’s services. Our AI-driven features are designed to deliver relevant recommendations, personalized content, and optimized interactions while maintaining rigorous privacy, security, and ethical standards throughout the process.

Our approach is guided by core principles to ensure responsible use of personal data:

  • Purpose Limitation: Personal data is used exclusively for clearly defined objectives, such as improving recommendation engines, personalizing content, refining search functionality, and optimizing user interactions. We do not repurpose data for unrelated uses without explicit user consent.
  • Data Minimization and Aggregation: Only the minimum necessary data is collected and processed. Wherever feasible, data is anonymized, pseudonymized, or aggregated to prevent re-identification while enabling meaningful AI training.
  • Privacy-Preserving Techniques: We deploy advanced privacy-enhancing technologies (PETs), including differential privacy, secure multi-party computation, and homomorphic encryption, ensuring sensitive information remains protected while allowing AI models to learn effectively; a minimal differential-privacy sketch follows this list.
  • Bias and Fairness Mitigation: All datasets are carefully evaluated to reduce the risk of algorithmic bias. Continuous monitoring, auditing, and fairness testing are conducted to ensure equitable outcomes for all users, regardless of demographic or sensitive attributes.
  • Transparency and User Control: Users are informed about how their data may be used for AI model training and retain control over their participation. Through the Consent Management Dashboard and account settings, users can opt in or out of AI-driven personalization features at any time.
  • Security and Compliance: AI training is conducted in secure environments with strict access controls, encryption, and audit logging. All processes comply with GDPR, the EU AI Act, and other applicable data protection regulations.
  • Ongoing Oversight and Improvement: AI models are continuously monitored and updated to improve accuracy, fairness, and ethical alignment. User feedback and oversight mechanisms allow for prompt intervention if anomalies or concerns arise.
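
As an illustration of differential privacy, one of the PETs listed above, the sketch below applies the classic Laplace mechanism: noise calibrated to a query's sensitivity and a privacy budget (epsilon) is added to an aggregate count before release. The parameter values are illustrative only and are not Nexly's production settings.

```python
# Teaching example of the Laplace mechanism for differential privacy (illustrative parameters only).
import random


def laplace_noise(scale: float) -> float:
    """Laplace(0, scale) noise as the difference of two exponential draws."""
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)


def dp_count(true_count: int, epsilon: float) -> float:
    """Noisy count: a counting query has sensitivity 1, so the noise scale is 1 / epsilon."""
    return true_count + laplace_noise(1.0 / epsilon)


# Example: release how many (hypothetical) users clicked a feature, with epsilon = 0.5.
print(dp_count(true_count=1280, epsilon=0.5))
```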

36.1 Why We Use Your Personal Data for AI Model Training

The primary purpose of using personal data for AI training is to enhance the user experience on our platform. By analyzing user behaviors, interactions, and preferences, we can develop AI-driven services that better align with your expectations, offering more relevant content, personalized recommendations, and improved user interactions.

36.2 Ensuring Security and Compliance

All personal data used for AI model training is handled with strict security measures and privacy safeguards, protecting it from unauthorized access, disclosure, or misuse. Our processes comply with all applicable data protection laws, ensuring your rights are respected throughout.

36.3 Commitment to Ethical Data Use

Nexly is dedicated to the responsible and ethical use of personal data for AI training. Practices are rooted in fairness, transparency, and respect for user privacy. We proactively identify and mitigate biases in AI models to ensure data is used responsibly and equitably.

36.4 User Control Over Data Usage

Users retain meaningful control over the use of their data for AI model training. Through the Consent Management Dashboard, you can customize privacy settings, manage preferences, and opt in or out of specific AI-driven personalization features, ensuring your data is used in alignment with your comfort level.

36.5 Data Anonymization and Pseudonymization

To safeguard privacy, personal data is anonymized or pseudonymized before use in AI training. Anonymization removes identifiable information, while pseudonymization replaces it with unique identifiers. These measures prevent re-identification while enabling meaningful model training.
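
As a generic illustration of pseudonymization, the sketch below replaces a direct identifier with a keyed HMAC-SHA256 token before the record enters a training dataset. This is an assumed, simplified scheme rather than Nexly's actual implementation; the key would be stored separately under strict access control so that tokens cannot be reversed or linked without it.

```python
# Generic pseudonymization sketch (not Nexly's actual scheme): keyed hashing of identifiers.
import hashlib
import hmac
import os

# In practice the key lives in a secrets manager, separate from the pseudonymized dataset.
PSEUDONYMIZATION_KEY = os.urandom(32)


def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a stable, non-reversible token."""
    digest = hmac.new(PSEUDONYMIZATION_KEY, identifier.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()


record = {"email": "user@example.com", "purchases": 7}
training_record = {"user_token": pseudonymize(record["email"]), "purchases": record["purchases"]}
print(training_record)  # the email itself never enters the training dataset
```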

36.6 Regulatory Compliance

Nexly strictly adheres to GDPR, the EU AI Act, and other relevant regulations when using personal data for AI training. User privacy rights are always respected, and practices are continuously aligned with legal requirements.

36.7 Regular Audits and Data Retention

We conduct regular audits to verify compliance with our privacy, security, and ethical standards. Personal data used for AI training is retained only as long as necessary for the intended purpose and securely deleted once no longer required.
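
The retention commitment above could be enforced, for example, by a periodic job that compares each record's age against its category's retention period and removes anything past the limit. The categories and retention periods below are hypothetical placeholders, not Nexly's actual schedule.

```python
# Hypothetical retention sweep; categories and retention periods are illustrative only.
from datetime import datetime, timedelta, timezone
from typing import Dict, List

RETENTION_PERIODS = {
    "ai_training": timedelta(days=365),      # assumed value for illustration
    "support_tickets": timedelta(days=730),  # assumed value for illustration
}


def purge_expired(records: List[Dict], now: datetime) -> List[Dict]:
    """Keep only records still within their category's retention window."""
    kept = []
    for record in records:
        limit = RETENTION_PERIODS.get(record["category"])
        if limit is None or now - record["created_at"] <= limit:
            kept.append(record)
        # else: securely delete the record and log the deletion for audit purposes
    return kept


now = datetime.now(timezone.utc)
records = [
    {"category": "ai_training", "created_at": now - timedelta(days=400)},
    {"category": "ai_training", "created_at": now - timedelta(days=30)},
]
print(len(purge_expired(records, now)))  # -> 1
```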

For inquiries regarding personal data usage for AI model training or to exercise your privacy rights, please contact info@nexly.eu. Nexly is committed to providing a safe, ethical, and transparent digital experience while upholding the highest standards of privacy and trust.

37. Third-Party Audits and Certification

Nexly is committed to the highest standards of transparency, accountability, and trust in its data protection and AI practices. To validate compliance and reinforce confidence among users, regulators, and stakeholders, we subject our operations to rigorous third-party audits and certifications conducted by independent, accredited organizations.

Our third-party audit and certification framework is designed to ensure comprehensive oversight across all aspects of data governance and AI deployment:

  • Independent Verification: Accredited auditors assess Nexly’s data processing practices, AI systems, and security controls against international legal and ethical standards, including GDPR, the EU AI Act, APEC Cross-Border Privacy Rules (CBPRs), ISO 27001, and other relevant frameworks.
  • Scope of Audits: Audits cover organizational procedures, technical infrastructure, AI model development, training data usage, privacy-enhancing technologies (PETs), consent management processes, and ongoing risk mitigation practices.
  • Certification Programs: Nexly participates in recognized global certification schemes to formally demonstrate adherence to data protection, cybersecurity, and responsible AI governance standards. Certifications are renewed periodically to reflect evolving regulations and best practices.
  • Continuous Monitoring and Improvement: Audit findings inform our continuous improvement processes, including remediation plans, policy updates, and enhancements to AI governance, risk management, and operational procedures.
  • Transparency Reporting: Audit summaries and certification results are published or made available to regulators, partners, and users where appropriate, reinforcing accountability and trust while demonstrating our commitment to ethical and lawful data handling.

By combining independent verification, rigorous certification, and continuous improvement, Nexly ensures that its data protection and AI practices meet or exceed global standards, fostering a secure, ethical, and accountable digital environment for all stakeholders.

38. Privacy Impact Assessments (PIAs)

At Nexly, Privacy Impact Assessments (PIAs) are a cornerstone of our data protection and governance framework. PIAs provide a structured, systematic approach to identify, evaluate, and mitigate potential privacy risks arising from data processing activities. They are an essential tool for ensuring compliance with global data protection regulations, safeguarding individuals’ personal information, and embedding privacy by design into all aspects of our operations, particularly for high-risk or innovative processing initiatives.

Our PIAs are designed to achieve the following objectives:

  • Comprehensive Risk Identification: We conduct thorough analyses of all data collection, storage, processing, and sharing practices to identify potential privacy risks. This includes assessing risks of unauthorized access, data breaches, inadvertent disclosures, profiling, algorithmic bias, and misuse of sensitive information, as well as emerging risks introduced by novel technologies or cross-border data flows.
  • Structured Risk Evaluation and Prioritization: Identified risks are assessed based on their likelihood and potential impact. High-priority risks are flagged for immediate mitigation. The evaluation considers legal, ethical, and operational dimensions, ensuring a holistic view of privacy implications across the organization.
  • Targeted Mitigation Strategies: For each identified risk, we design and implement tailored mitigation measures. These may include:
    • Data minimization to reduce unnecessary exposure of personal information.
    • Pseudonymization and anonymization to protect user identities.
    • Advanced encryption and secure storage solutions.
    • Robust access controls, audit logging, and monitoring systems.
    • Privacy-enhancing technologies (PETs) integrated into AI and data workflows.
  • Ongoing Monitoring, Review, and Adaptation: PIAs are treated as a living process rather than a one-time exercise. We continuously monitor implemented safeguards, conduct periodic reviews, and update our assessments to reflect regulatory changes, evolving technologies, and operational adjustments. This ensures that privacy protections remain effective and resilient over time.
  • Stakeholder Engagement and Documentation: All PIAs are meticulously documented, including risk analyses, mitigation measures, and review outcomes. Findings are shared with relevant internal stakeholders and, when appropriate, with regulators, to ensure accountability, traceability, and transparency.

By embedding rigorous PIAs into our operational and technological practices, Nexly proactively identifies, mitigates, and monitors privacy risks. This approach strengthens user trust, supports regulatory compliance, and reinforces our commitment to ethical, secure, and responsible data management.

39. Regular Transparency Reports

Nexly is committed to fostering trust and accountability by regularly publishing detailed transparency reports. These reports provide clear insights into how we handle data requests, law enforcement inquiries, and other governmental or regulatory interactions that may impact user data or privacy. Our goal is to maintain a high level of transparency while safeguarding the confidentiality and security of our users.

Key aspects of our transparency reporting framework include:

  • Disclosure of Government and Legal Requests: Transparency reports include aggregated and anonymized information regarding requests from government agencies, law enforcement, or judicial authorities. We disclose the type, scope, and frequency of requests while respecting legal constraints that prevent the disclosure of specific cases.
  • Documented Responses: For each category of request, we outline Nexly’s response, including whether data was disclosed, partially provided, or challenged legally. This demonstrates our commitment to upholding user privacy and only complying with lawful, proportionate requests.
  • Contextual Analysis: Reports provide contextual explanations to help users understand the nature of requests, the applicable legal frameworks, and the safeguards in place to protect their data. This includes any measures taken to minimize data disclosure or contest requests deemed excessive or overbroad.
  • Frequency and Accessibility: Nexly publishes transparency reports at least biannually, ensuring timely insights into our data handling practices. Reports are made publicly accessible in a clear, user-friendly format on our website.
  • Commitment to Continuous Improvement: We regularly evaluate and enhance our transparency reporting practices based on user feedback, evolving legal standards, and best practices in corporate governance and privacy accountability.

By issuing regular transparency reports, Nexly reinforces its dedication to accountability, user trust, and ethical stewardship of personal data. These reports empower users, regulators, and stakeholders to understand how data is managed, while affirming our unwavering commitment to privacy and responsible governance.

40. Accessibility and Multilingual Support

Nexly is deeply committed to accessibility, inclusivity, and global user engagement. We ensure that our privacy policies, terms of service, and platform interfaces are accessible to all users, including individuals with disabilities, and available in multiple languages to serve our diverse international audience.

Our accessibility and multilingual support framework includes the following elements:

  • Compliance with Accessibility Standards: Our privacy policy and platform adhere rigorously to the Web Content Accessibility Guidelines (WCAG) 2.1 at Level AA or higher. This includes providing screen reader compatibility, keyboard navigation support, sufficient color contrast, scalable text, and alternative text for images. These measures ensure that users with visual, auditory, motor, or cognitive impairments can access and understand critical privacy information without barriers.
  • Multilingual Availability: Recognizing our global user base, Nexly provides translations of our privacy policy and key platform documentation in multiple languages. This enables users worldwide to access information in their preferred language, ensuring clarity, comprehension, and meaningful consent.
  • Continuous Accessibility Testing: Accessibility is not static. We regularly conduct internal audits and usability testing with diverse user groups, including individuals with disabilities, to identify and remediate barriers. Feedback is actively incorporated to improve navigability, readability, and comprehension.
  • Inclusive Design Principles: Accessibility considerations are integrated into product development from the outset. Privacy notices, consent interfaces, and AI-driven features are designed to be inclusive, intuitive, and user-friendly for all, reducing cognitive load and ensuring equitable access to information.
  • User Support and Feedback: Users encountering accessibility or language challenges can contact our dedicated support team, which provides assistance and guidance to ensure that all users can exercise their privacy rights fully and independently.

By embedding accessibility and multilingual support into our privacy practices and platform design, Nexly ensures that all users, regardless of ability, language, or location, can access, understand, and exercise their privacy rights with confidence. This commitment reinforces our core values of inclusivity, transparency, and user empowerment.

41. Cookies and Tracking Technologies

At Nexly, we prioritize your privacy and are committed to transparency in our use of cookies and other tracking technologies. These tools help us enhance your browsing experience, understand website traffic, optimize site functionality, and deliver personalized content and advertising responsibly.

Before any non-essential cookies or tracking mechanisms are deployed, we request your explicit consent. By clicking "Accept" or continuing to use our website without adjusting your settings, you provide informed consent to our use of these technologies as described in our Cookie Policy.

Key principles of our cookies and tracking practices include:

  • Granular Consent: Users are offered fine-grained control over cookies and tracking technologies. Consent is categorized by purpose, such as necessary, performance, functionality, and advertising, allowing you to enable or disable specific types according to your preferences (see the illustrative sketch after this list).
  • Consent Revocation and Management: You can withdraw or modify your consent at any time through our Consent Management Dashboard. Changes are applied immediately, ensuring your preferences are respected across all platform interactions.
  • Transparency and Clarity: Our Cookie Policy clearly explains the types of cookies we use, the data collected, their purpose, duration, and any third parties involved. This ensures that you can make fully informed decisions regarding your online privacy.
  • Privacy by Default: Only essential cookies necessary for core website functionality are enabled by default. Non-essential cookies remain inactive until explicit consent is granted, in line with regulatory requirements such as GDPR and ePrivacy directives.
  • Regular Review and Compliance: We conduct periodic reviews of our cookie and tracking practices to ensure ongoing compliance with global privacy regulations. We also update our mechanisms and policies to reflect evolving standards, emerging best practices, and user expectations.
  • Security and Data Protection: All data collected via cookies and tracking technologies are stored and processed securely. Access is strictly controlled, and privacy-enhancing technologies are applied where possible to prevent misuse or unauthorized disclosure.
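
The following minimal sketch, written in Python, illustrates one way purpose-based consent categories with privacy-by-default behavior could be modeled. The category names mirror the purposes listed above; the ConsentState class and its methods are hypothetical and do not describe Nexly’s actual consent tooling.

```python
from dataclasses import dataclass, field

# Purpose-based cookie categories mirroring the list above. Only the
# "necessary" category is active by default (privacy by default); every
# other category stays off until the user explicitly opts in.
CATEGORIES = ("necessary", "performance", "functionality", "advertising")

@dataclass
class ConsentState:
    choices: dict = field(
        default_factory=lambda: {c: (c == "necessary") for c in CATEGORIES}
    )

    def grant(self, category: str) -> None:
        """Record an explicit opt-in for a non-essential category."""
        if category not in CATEGORIES:
            raise ValueError(f"Unknown category: {category}")
        self.choices[category] = True

    def withdraw(self, category: str) -> None:
        """Withdraw consent; the essential category cannot be disabled."""
        if category != "necessary" and category in CATEGORIES:
            self.choices[category] = False

    def is_allowed(self, category: str) -> bool:
        return self.choices.get(category, False)

# A new visitor starts with essential cookies only, opts in to
# performance cookies, then withdraws that consent later.
state = ConsentState()
state.grant("performance")
state.withdraw("performance")
print(state.choices)
```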

By implementing these measures, Nexly ensures that users have meaningful control over their online privacy, fully understand the implications of cookie use, and can engage with our platform confidently and securely.

42. Children's Privacy

At Nexly, safeguarding the privacy and safety of children is a top priority. Our services are expressly designed for users aged 16 and above. We do not knowingly collect, process, or store personal data from individuals under the age of 16. Any data inadvertently submitted by a child will be promptly addressed in accordance with strict privacy and legal standards.

Key principles of our children’s privacy practices include:

  • Age-Appropriate Access: Our platforms and services include mechanisms to help prevent use by children under 16. Where feasible, age verification or parental consent measures are implemented to ensure compliance with applicable regulations, including the GDPR and other child protection laws.
  • Prohibition of Targeted Advertising: We do not serve targeted advertising to children. Any content or marketing communications are strictly limited to general, age-appropriate materials without using personal data to profile or influence minors.
  • Parental and Guardian Intervention: Parents or legal guardians who believe that their child has provided personal data to Nexly can contact us at info@nexly.eu. We will verify the request, promptly remove the child’s data, and provide confirmation of deletion, ensuring full compliance with legal obligations.
  • Minimization and Safety by Design: All processes involving potential interaction with minors follow privacy-by-design principles, ensuring minimal data collection, strong access controls, and secure handling to protect children from inadvertent exposure or misuse of personal information.
  • Ongoing Monitoring and Compliance: We regularly review our services, content, and data collection practices to prevent unauthorized access or processing of children’s data, aligning with global best practices and regulatory guidance on child privacy.

By embedding these safeguards, Nexly ensures a safe digital environment for minors, upholds the highest standards of children’s privacy protection, and maintains compliance with international data protection regulations and ethical standards.

43. Data Security Measures

At Nexly, protecting personal data is a fundamental priority. We employ a comprehensive, multi-layered approach to data security, combining technical, administrative, and organizational safeguards to ensure the confidentiality, integrity, and availability of personal information throughout its lifecycle. Our security strategy is designed to meet or exceed industry standards and regulatory requirements, including GDPR and other relevant privacy laws.

  • Advanced Encryption: All personal data is protected using state-of-the-art encryption protocols, both in transit and at rest. This includes Transport Layer Security (TLS) for network communications and AES-256 encryption for data storage. Encryption ensures that information remains unreadable to unauthorized parties and maintains its integrity across all systems (an illustrative sketch follows this list).
  • Granular Access Controls: Access to personal data is restricted strictly to personnel who require it for legitimate business purposes. We implement role-based access controls (RBAC), multi-factor authentication (MFA), and regular access audits to enforce strict authorization policies and prevent unauthorized data access or modifications.
  • Network and Endpoint Protection: Our infrastructure is safeguarded by enterprise-grade firewalls, intrusion detection and prevention systems (IDPS), and continuous network monitoring. These systems identify and mitigate threats such as malware, ransomware, and unauthorized intrusion attempts in real time.
  • Continuous Security Monitoring and Testing: Nexly conducts ongoing vulnerability assessments, penetration testing, and threat simulations to proactively identify and remediate potential weaknesses. Security measures are continuously updated to address evolving cyber threats and to align with best practices in cybersecurity.
  • Data Backup and Disaster Recovery: We maintain encrypted backups across multiple geographically distributed data centers to ensure business continuity and data resilience. Comprehensive disaster recovery plans are tested regularly to enable rapid restoration of services and protection of critical information in case of system failures or cyber incidents.
  • Employee Security Training: All staff with access to personal data undergo mandatory, recurring cybersecurity and privacy training. This ensures that personnel are equipped to recognize potential threats, follow secure handling procedures, and maintain compliance with security policies and legal requirements.
  • Incident Detection and Response: Nexly has a robust incident response framework that facilitates rapid detection, containment, investigation, and remediation of security events. In the unlikely event of a breach, affected individuals and regulatory authorities are promptly notified in accordance with applicable legal obligations, and corrective actions are taken to prevent recurrence.
  • Third-Party Security Oversight: All vendors and partners with access to personal data are subject to stringent security requirements, including contractually mandated data protection standards, regular security audits, and compliance verification. This ensures that external collaborators uphold Nexly’s high security standards.
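
Purely as an illustration of encryption at rest, and not a description of Nexly’s production systems, the sketch below encrypts a single record with AES-256 in GCM mode using the widely available Python cryptography library; key generation, storage, and rotation are deliberately simplified assumptions.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Illustrative only: a real deployment would obtain the key from a
# managed key store (HSM/KMS) and rotate it, never generate it inline.
key = AESGCM.generate_key(bit_length=256)  # 256-bit key, as referenced above
aesgcm = AESGCM(key)

def encrypt_record(plaintext: bytes, associated_data: bytes = b"") -> bytes:
    """Encrypt a record with AES-256-GCM; the random nonce is prepended."""
    nonce = os.urandom(12)  # 96-bit nonce, the recommended size for GCM
    return nonce + aesgcm.encrypt(nonce, plaintext, associated_data)

def decrypt_record(blob: bytes, associated_data: bytes = b"") -> bytes:
    """Split off the nonce, then authenticate and decrypt the payload."""
    nonce, ciphertext = blob[:12], blob[12:]
    return aesgcm.decrypt(nonce, ciphertext, associated_data)

stored = encrypt_record(b"user@example.com")
assert decrypt_record(stored) == b"user@example.com"
```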

By integrating these advanced technical, administrative, and organizational measures, Nexly delivers a comprehensive, multi-layered defense strategy. This approach safeguards personal information against evolving threats, enhances resilience, and fosters trust while maintaining strict compliance with international data protection and cybersecurity standards.

44. Data Retention Justifications

At Nexly, we carefully manage the retention of personal data to ensure it is stored only for as long as necessary to fulfill the specific purposes for which it was collected, in accordance with applicable data protection laws, including GDPR. Retention periods are determined based on the type of data, its intended use, and legal, regulatory, or operational requirements. Our goal is to balance business needs, regulatory obligations, and user privacy while minimizing the risks associated with unnecessary data storage.

  • Purpose Limitation: Personal data is retained strictly for the purposes outlined in this Privacy Policy, such as account management, service delivery, AI model training (where applicable), and customer support. Data is not kept for unrelated purposes without obtaining explicit consent.
  • Legal and Regulatory Compliance: Certain types of personal data are retained for longer periods to comply with legal obligations, such as tax, financial, or corporate record-keeping requirements. For instance, order and transaction records may be retained for 5–7 years, depending on jurisdictional mandates, to support audits, regulatory reporting, or dispute resolution.
  • Operational Necessity: Data required for ongoing business operations, such as account history, support tickets, and preference settings, is retained only for as long as necessary to provide high-quality services, troubleshoot issues, and optimize user experience.
  • Risk-Based Retention Review: We conduct regular reviews of stored data to assess whether retention remains necessary. Unnecessary data is securely deleted or anonymized to mitigate privacy risks and reduce the volume of retained information.
  • Data Minimization and Anonymization: When retention of identifiable data is no longer required, Nexly applies anonymization or pseudonymization techniques to preserve aggregate insights or statistical utility without retaining personally identifiable information (an illustrative sketch follows this list).
  • Transparency and User Control: Users are informed about our retention practices and periods for different types of personal data. Where applicable, users can request early deletion or modification of their data, subject to operational or legal constraints.
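
To make the anonymization and pseudonymization step more concrete, the sketch below shows one common approach: keyed HMAC-SHA-256 pseudonymization applied once a hypothetical retention period has lapsed. The field names, the seven-year period, and the pseudonymize helper are illustrative assumptions, not a description of Nexly’s retention tooling.

```python
import hmac
import hashlib
from datetime import datetime, timedelta, timezone

# Hypothetical retention rule: identifiable order data is reviewed after
# seven years, the upper end of the 5-7 year range mentioned above.
RETENTION = timedelta(days=7 * 365)
SECRET_KEY = b"rotate-me-and-keep-me-in-a-key-vault"  # illustrative only

def pseudonymize(value: str) -> str:
    """Replace an identifier with a keyed, non-reversible pseudonym."""
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()

def review_record(record: dict, now: datetime) -> dict:
    """Pseudonymize direct identifiers once the retention period has lapsed."""
    if now - record["created_at"] > RETENTION:
        record["email"] = pseudonymize(record["email"])
        record["name"] = pseudonymize(record["name"])
    return record

order = {
    "created_at": datetime(2017, 1, 15, tzinfo=timezone.utc),
    "email": "user@example.com",
    "name": "Jane Doe",
    "total": 42.50,  # aggregate value retained for statistics
}
print(review_record(order, datetime.now(timezone.utc)))
```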

By applying these principles, Nexly ensures that personal data is retained responsibly, securely, and in compliance with legal obligations, while minimizing privacy risks and fostering trust. Our retention policies are regularly reviewed and updated in line with evolving regulatory requirements, industry best practices, and technological advancements.

45. Algorithmic Impact Assessment (AIA)

Nexly conducts comprehensive Algorithmic Impact Assessments (AIAs) for AI systems that may pose significant risks to individuals’ rights, safety, or broader societal interests. AIAs are integral to our governance framework, providing a structured, evidence-based approach to identify, evaluate, and mitigate potential harms associated with algorithmic decision-making. This process ensures ethical, transparent, and accountable AI deployment while fostering trust with users and stakeholders.

Structured AIA Process

  • Risk Identification: We systematically analyze AI systems to identify potential risks across multiple dimensions, including:
    • Bias and discrimination affecting protected characteristics (e.g., race, gender, age, disability).
    • Privacy violations, data misuse, or leakage of sensitive information.
    • Societal and operational impacts, including unintended consequences, amplification of social biases, or disruption to public trust.
  • Risk Evaluation: Each identified risk is evaluated based on likelihood, severity, and scope of potential impact. This assessment informs prioritization and determines the appropriate mitigation strategies necessary to reduce or eliminate harm.
  • Mitigation Strategy Development: We design and implement mitigation measures tailored to specific risks, which may include:
    • Data Anonymization and Pseudonymization: Removing or masking personally identifiable information to reduce privacy exposure while maintaining analytic utility.
    • Algorithmic Fairness Techniques: Integrating fairness constraints, bias-correction algorithms, and equitable model evaluation to ensure fair treatment of all user groups.
    • Human Oversight Mechanisms: Embedding human review processes to monitor high-stakes or automated decisions and intervene when necessary.
    • Transparency and Explainability: Producing clear documentation and user-facing explanations of AI system design, functionality, and decision-making criteria.
  • Monitoring, Review, and Continuous Improvement: AI systems are continuously monitored to evaluate performance, detect emergent risks, and ensure ongoing compliance with ethical, legal, and regulatory standards. Periodic reassessments and audits refine mitigation strategies and align AI behavior with evolving societal and technical expectations.

Evaluation Criteria for AI Systems

  • Data Quality and Bias Assessment: Evaluating dataset representativeness, accuracy, completeness, and potential for systemic or historical bias.
  • Algorithmic Transparency: Ensuring decision-making processes are interpretable, explainable, and accessible to both internal stakeholders and end-users.
  • Fairness and Non-Discrimination: Implementing tests and safeguards to prevent discriminatory outcomes, promote equity, and uphold human rights (see the illustrative sketch after this list).
  • Privacy Impact: Analyzing the potential effect on user privacy and integrating robust technical and organizational controls to safeguard personal data.
  • Social and Societal Impact: Considering broader implications, including potential for misuse, unintended reinforcement of social biases, or other negative societal outcomes.
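
By way of illustration only, the sketch below computes one widely used fairness measure, the demographic parity difference between two groups’ positive-outcome rates. The sample data, group labels, and tolerance threshold are hypothetical and do not refer to any specific Nexly model or dataset.

```python
# Demographic parity difference: the gap between the positive-outcome
# rates of two groups. A value near 0 suggests similar treatment; a
# large gap flags the system for deeper review and mitigation.

def positive_rate(outcomes: list[int]) -> float:
    """Share of positive (1) decisions in a list of 0/1 outcomes."""
    return sum(outcomes) / len(outcomes) if outcomes else 0.0

def demographic_parity_difference(group_a: list[int], group_b: list[int]) -> float:
    return abs(positive_rate(group_a) - positive_rate(group_b))

# Hypothetical audit sample: model decisions (1 = favorable) split by group.
group_a = [1, 1, 0, 1, 0, 1, 1, 0]
group_b = [1, 0, 0, 1, 0, 0, 1, 0]

gap = demographic_parity_difference(group_a, group_b)
THRESHOLD = 0.1  # illustrative tolerance, not a regulatory figure
print(f"Parity gap: {gap:.2f} -> "
      f"{'review required' if gap > THRESHOLD else 'within tolerance'}")
```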

By conducting rigorous AIAs, Nexly ensures that our AI systems operate responsibly, ethically, and transparently. This structured approach allows us to mitigate risks to individuals and society while maximizing the benefits of AI innovation, reinforcing trust and accountability across all aspects of our platform.

46. Human-in-the-Loop (HITL) Systems

At Nexly, we integrate Human-in-the-Loop (HITL) mechanisms into AI systems wherever human judgment is essential to safeguard individual rights, ethical principles, and legal compliance. HITL ensures that automated decision-making is complemented by human oversight, enabling intervention, review, or override of AI-driven actions, particularly in high-stakes or sensitive scenarios. This approach reinforces accountability, mitigates risks, and promotes trust in our AI technologies.

Core Principles of Human-in-the-Loop Implementation

  • Critical Decision Oversight: HITL mechanisms are prioritized for decisions with significant impact on individuals, such as eligibility determinations, content moderation, automated recommendations affecting personal opportunities, or actions that may influence user rights.
  • Real-Time Human Intervention: Systems are designed to allow authorized personnel to review AI outputs in real time, assess potential risks or errors, and override automated decisions when necessary. This ensures that human judgment supplements algorithmic outputs.
  • Accountability and Traceability: All HITL interactions are logged and auditable, providing a transparent record of human interventions, review outcomes, and rationale for overrides. This supports compliance with regulatory requirements and ethical standards (see the illustrative sketch after this list).
  • Human Expertise Integration: Oversight personnel are trained in AI ethics, privacy regulations, fairness principles, and domain-specific knowledge relevant to the AI system’s context. This ensures that interventions are informed, consistent, and aligned with best practices.
  • Continuous Feedback and Improvement: Insights from human interventions are used to refine AI models, update decision-making rules, and reduce future errors. This creates a feedback loop that enhances model accuracy, fairness, and reliability over time.
  • Risk-Based Application: HITL is deployed strategically based on risk assessments, ensuring human oversight is focused on processes where errors could cause material harm, legal violations, or ethical concerns.
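
The sketch below illustrates, using assumed field names, how a human override of an automated decision might be captured as a structured, auditable log entry; it is a simplified example rather than the logging schema Nexly actually operates.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class HITLAuditEntry:
    """One auditable record of a human review of an automated decision."""
    decision_id: str   # identifier of the automated decision under review
    model_output: str  # what the AI system originally decided
    reviewer_id: str   # authorized person who performed the review
    action: str        # "approved", "overridden", or "escalated"
    rationale: str     # free-text justification for the action
    reviewed_at: str   # ISO 8601 timestamp of the intervention

def log_intervention(entry: HITLAuditEntry) -> str:
    """Serialize the entry as JSON so it can be appended to an audit trail."""
    return json.dumps(asdict(entry))

entry = HITLAuditEntry(
    decision_id="dec-2025-000123",
    model_output="reject",
    reviewer_id="reviewer-42",
    action="overridden",
    rationale="Supporting documents satisfied the eligibility criteria.",
    reviewed_at=datetime.now(timezone.utc).isoformat(),
)
print(log_intervention(entry))
```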

By embedding HITL mechanisms, Nexly ensures that AI systems operate responsibly and ethically, balancing automation with human judgment. This approach protects individual rights, enhances accountability, and aligns AI-driven processes with regulatory requirements and societal expectations.

47. Accessibility in AI Systems

At Nexly, we are committed to designing and developing AI systems that are accessible, inclusive, and usable by all individuals, including those with disabilities. Accessibility is a core component of our AI development lifecycle, ensuring that everyone, regardless of ability, can interact effectively with our technologies. By embedding accessibility principles from the outset, we strive to eliminate barriers and foster equitable digital experiences.

Key Principles of Accessibility in AI Systems

  • Inclusive Design: AI systems are built using inclusive design principles to accommodate a wide range of abilities, including visual, auditory, cognitive, and motor impairments. Accessibility considerations are integrated into UI/UX design, AI outputs, and interactive features.
  • Compatibility with Assistive Technologies: Our AI systems are tested and optimized to work seamlessly with assistive technologies, such as screen readers, speech recognition software, alternative input devices, and other accessibility tools. This ensures that users relying on assistive technology can fully access AI-driven functionality.
  • Accessible Content Generation: AI-generated content, recommendations, and outputs are designed to be understandable and usable for diverse audiences. This includes providing alternative text for images, captioning for audio/video outputs, clear language explanations, and accessible formats for visualizations.
  • User-Centered Accessibility Testing: Accessibility evaluations are conducted throughout the AI lifecycle, including design, development, deployment, and post-launch monitoring. User feedback from individuals with disabilities is incorporated to continuously improve accessibility and usability.
  • Regulatory and Standards Compliance: Nexly ensures adherence to relevant accessibility regulations and standards, such as the Web Content Accessibility Guidelines (WCAG) 2.1 and applicable local laws. Compliance audits and periodic reviews are conducted to maintain and enhance accessibility commitments.
  • Training and Awareness: Developers, product managers, and AI teams receive training on accessibility best practices, assistive technology compatibility, and inclusive design principles to embed accessibility in every aspect of AI development.

By integrating these accessibility principles, Nexly ensures that AI systems are equitable, usable, and empowering for all users. Our commitment to accessibility is an ongoing effort, with continuous improvements informed by user feedback, technological advances, and evolving accessibility standards.

48. Procurement Requirements for AI Systems

Nexly is committed to ensuring that all AI systems integrated into our platform, whether developed internally or procured from third-party vendors, adhere to the highest standards of ethical AI, data protection, and regulatory compliance. Our procurement framework establishes rigorous requirements to evaluate, select, and monitor third-party AI solutions before integration into Nexly’s infrastructure.

Key Principles for AI Procurement

  • Ethical and Regulatory Compliance: All third-party AI systems must comply with ethical AI principles, including fairness, transparency, accountability, and privacy by design. Compliance with the EU AI Act, GDPR, and other applicable local and international regulations is mandatory.
  • Vendor Due Diligence: Nexly conducts thorough due diligence on AI vendors prior to procurement. This includes evaluating vendor policies, governance structures, past performance, security protocols, data handling practices, and commitment to ethical AI standards. Vendors must demonstrate adherence to rigorous testing, auditing, and documentation practices.
  • Data Protection and Privacy Safeguards: AI systems must implement robust technical and organizational measures to safeguard personal data, including encryption, access controls, anonymization, and adherence to privacy-enhancing technologies (PETs). Data minimization and purpose limitation principles are strictly enforced.
  • Risk Assessment and Mitigation: Before integration, AI systems are assessed for potential risks, including algorithmic bias, security vulnerabilities, and unintended societal impacts. Identified risks must be mitigated through contractual obligations, technical safeguards, or operational controls.
  • Transparency and Documentation: Vendors are required to provide detailed documentation on AI system functionality, training data, model evaluation processes, and decision-making mechanisms. This ensures traceability and accountability, enabling internal audits and external regulatory oversight.
  • Continuous Monitoring and Compliance: Procurement does not end at acquisition. AI systems are continuously monitored for ethical compliance, performance, and security. Vendors are contractually obligated to report incidents, maintain regular audits, and update systems in accordance with evolving regulations and Nexly’s ethical standards.
  • Ethical Innovation Alignment: Procured AI systems must align with Nexly’s commitment to responsible AI innovation. Systems are evaluated for societal impact, inclusivity, accessibility, and alignment with user-centric principles, ensuring that technology serves both business goals and user trust.

By implementing these rigorous procurement requirements, Nexly ensures that third-party AI systems meet our ethical, legal, and technical standards. This proactive approach mitigates risks, fosters transparency, and strengthens trust with users, stakeholders, and regulatory authorities, while enabling the safe and responsible deployment of AI solutions across our platform.

49. Collaboration with Regulatory Authorities

Nexly is committed to fostering a cooperative and transparent relationship with regulatory authorities, data protection agencies, and other relevant stakeholders to promote the responsible development, deployment, and oversight of AI systems. Regulatory engagement is approached not merely as compliance, but as a strategic partnership to advance ethical AI practices, safeguard fundamental rights, and contribute to the evolving regulatory landscape.

Core Principles of Regulatory Collaboration

  • Proactive Engagement: Nexly maintains open lines of communication with national and international regulators. We proactively seek guidance on emerging AI technologies, compliance expectations, and best practices for governance, risk mitigation, and ethical AI deployment.
  • Participation in Industry Working Groups: We actively contribute to cross-industry forums, standardization initiatives, and public-private working groups. This collaboration ensures that Nexly’s practices align with sectoral standards and evolving regulatory frameworks, while sharing insights to help shape policy and standards for responsible AI.
  • Transparent Reporting and Disclosure: Nexly provides regulators with timely and accurate information regarding our AI systems, data processing activities, algorithmic risk assessments, and ethical compliance measures. Transparency includes documentation of model design, training datasets, mitigation strategies, and audit outcomes.
  • Compliance Advisory and Guidance: Regulatory collaboration informs our internal policies and procedures, ensuring that our AI governance framework meets or exceeds the requirements of the EU AI Act, GDPR, and other applicable laws. Guidance from regulators supports continuous improvement and alignment with emerging standards.
  • Incident Coordination: In the event of AI system malfunctions, breaches, or high-risk incidents, Nexly promptly coordinates with relevant authorities to provide full disclosure, facilitate investigations, and implement recommended corrective actions.
  • Global Regulatory Alignment: As a global platform, Nexly engages with international regulators and standard-setting bodies to harmonize ethical AI practices across jurisdictions, ensuring consistency in privacy, fairness, security, and accountability standards.

By maintaining a structured, transparent, and proactive approach to regulatory collaboration, Nexly ensures that its AI systems operate responsibly, comply with current and emerging regulations, and contribute to broader industry efforts to advance trustworthy, human-centric AI. This approach reinforces legal compliance while building trust with users, regulators, and the wider public.

50. Public Transparency Reports on AI Systems

Nexly is committed to fostering public trust through comprehensive and proactive disclosure of information about our AI systems. We publish regular transparency reports that provide detailed insights into the design, functionality, governance, and societal impact of our AI technologies. These reports promote accountability, facilitate stakeholder understanding, and demonstrate our adherence to ethical and regulatory standards.

Key Elements of Our AI Transparency Reports

  • AI System Overview: Each report provides clear descriptions of AI systems in operation, including objectives, scope, functionalities, and the types of data processed. Where applicable, the report explains how AI systems influence user experiences and decision-making processes.
  • Algorithmic Impact and Risk Assessment: We disclose findings from Algorithmic Impact Assessments (AIAs), highlighting potential risks to individuals, such as bias, discrimination, or privacy concerns, along with the mitigation measures implemented to address them.
  • Fairness and Bias Audits: Transparency reports include summaries of fairness and bias testing outcomes, demonstrating how Nexly ensures equitable treatment of users and minimizes unintended discriminatory effects in AI-driven decisions.
  • Data Governance and Privacy Protections: Reports outline measures taken to protect personal data used in AI systems, including anonymization, pseudonymization, encryption, and privacy-preserving techniques.
  • User Control and Engagement: Information is provided on mechanisms that empower users to manage their interactions with AI systems, including consent management, personalization controls, and opt-out options.
  • Regulatory Compliance and Third-Party Audits: Reports detail Nexly’s adherence to frameworks such as the EU AI Act, GDPR, and other relevant standards. They also summarize third-party audit results and certifications verifying compliance and best practices.
  • Societal Impact and Use Cases: Reports provide insights into how AI systems affect individuals, communities, and broader societal outcomes, highlighting both positive contributions and lessons learned from challenges or unintended consequences.

By publishing these transparency reports, Nexly reinforces accountability, encourages public dialogue, and demonstrates our commitment to ethical, responsible, and human-centric AI. Reports are publicly accessible to regulators, users, researchers, and other stakeholders to ensure that our AI practices are open, understandable, and continuously improving.

51. Ethical Review Board

Nexly has established an Ethical Review Board (ERB) composed of leading experts in AI ethics, data protection, human rights, law, and related fields. The ERB provides independent oversight, guidance, and approval of all AI systems to ensure alignment with the EU AI Act, GDPR, and global best practices. Serving as a central governance mechanism, the ERB upholds high ethical standards across the AI lifecycle, from design and development to deployment and monitoring.

Core Functions of the Ethical Review Board

  • Review and Approval of AI Systems: Conduct comprehensive evaluations of all AI systems, assessing ethical implications, societal impact, regulatory compliance, and technical safety before deployment.
  • Guidance on Ethical AI Practices: Provide expert advice on responsible AI development, operational practices, and governance frameworks, ensuring adherence to principles such as fairness, transparency, accountability, and human-centric design.
  • Continuous Monitoring and Oversight: Track AI system performance, evaluate real-world impacts, identify emerging risks, and recommend corrective or mitigating actions where necessary.
  • Independent Assessment: Operate independently from operational and product teams to ensure unbiased oversight and safeguard against conflicts of interest, enhancing credibility and stakeholder trust.
  • Policy Development and Advisory: Contribute to the creation and refinement of internal ethical policies, standards, and protocols for AI governance, aligning Nexly’s practices with evolving legal, societal, and technological norms.

Criteria for Selecting Board Members

  • Expertise: Members are selected based on demonstrated expertise in AI ethics, data privacy, human rights, legal frameworks, or technology governance.
  • Diversity of Perspectives: The board prioritizes multidisciplinary and multicultural representation to ensure a wide range of ethical, legal, societal, and technical perspectives.
  • Independence: Members operate independently of Nexly’s operational teams and maintain a clear separation from commercial or development pressures, mitigating conflicts of interest.

Process for Reviewing AI Systems

  • Initial Assessment: Examine the AI system’s design, intended purpose, underlying datasets, technical architecture, and risk profile.
  • Ethical Analysis: Evaluate compliance with ethical principles including fairness, transparency, accountability, human oversight, and societal impact.
  • Risk Assessment and Mitigation: Identify potential harms to individuals, communities, or fundamental rights and review mitigation strategies such as bias reduction, privacy-preserving methods, and human-in-the-loop safeguards.
  • Recommendations and Approval: Provide actionable guidance for improvements, modifications, or additional safeguards. Final approval for AI deployment is granted only when ethical, legal, and societal considerations are fully addressed.
  • Ongoing Review: Post-deployment monitoring ensures that AI systems continue to meet ethical standards and comply with evolving regulations and societal expectations.

By integrating independent ethical oversight through the ERB, Nexly ensures that AI systems operate responsibly, minimize risks to individuals and society, and foster trust among users, regulators, and stakeholders. The ERB exemplifies our commitment to embedding ethics, accountability, and transparency into every stage of AI development and deployment.

52. Continuous Ethical AI Training

At Nexly, we are deeply committed to cultivating a culture of ethical AI development and responsible deployment. Continuous education and structured training programs ensure that all personnel involved in AI, including engineers, data scientists, product managers, and operational teams, are equipped to make informed, ethical decisions throughout the AI lifecycle. Our initiatives embed ethical awareness, regulatory compliance, and user-centric considerations into every stage of AI system design, deployment, and monitoring.

Core Initiatives for Ethical AI Training

  • Mandatory Ethical AI Training: All employees involved in AI development or oversight undergo comprehensive training programs covering:
    • Ethical AI Frameworks: Principles including fairness, accountability, transparency, privacy, safety, and non-discrimination.
    • Bias Detection and Mitigation: Techniques for identifying, measuring, and addressing biases in datasets and algorithmic models.
    • Data Privacy and Security: Best practices for safeguarding personal data and ensuring compliance with GDPR, the EU AI Act, and other regulations.
    • Explainability and Transparency: Ensuring AI outputs are interpretable, traceable, and communicable to both internal stakeholders and end users.
    • Responsible AI Use Cases: Evaluating potential ethical risks, societal impacts, and unintended consequences of AI applications.
  • Ongoing Professional Development: Nexly fosters continuous learning and growth through:
    • Internal Workshops and Seminars: Led by internal ethics experts and external thought leaders on emerging AI trends, governance frameworks, and regulatory updates.
    • External Training and Certification: Encouraging employees to pursue certifications in responsible AI, data ethics, and privacy-compliant AI practices.
    • Ethical AI Advisory Boards: Engaging with internal and external experts who provide guidance, independent assessments, and oversight of AI initiatives.
  • Embedding Ethics in Product Lifecycle: Ethical considerations are integrated into every stage of product development:
    • Design Thinking Workshops: Incorporating ethical AI principles during ideation and system design phases.
    • Code Reviews and Algorithmic Audits: Systematic reviews to detect ethical risks, privacy vulnerabilities, and potential biases.
    • User Feedback and Evaluation: Collecting and analyzing feedback to measure the real-world ethical impact of AI systems and continuously improve outcomes.
  • Continuous Monitoring and Program Evaluation: Training programs are assessed and iteratively improved based on:
    • Employee Feedback: Regular surveys and evaluations to enhance relevance and effectiveness of training modules.
    • Industry Best Practices: Alignment with evolving standards in ethical AI, human-centric design, and regulatory guidance.
    • Research and Technological Advances: Incorporation of new findings in AI ethics, fairness, explainability, and privacy-preserving techniques.

Through these initiatives, Nexly ensures that ethical principles are embedded into both organizational culture and operational practices. Continuous ethical AI training strengthens trust with users, mitigates risks associated with AI deployment, and supports the creation of a responsible, transparent, and sustainable AI ecosystem.

53. Community Engagement and Feedback

Nexly is committed to fostering meaningful engagement with the broader community to ensure that its AI systems are socially responsible, user-centric, and aligned with public interest. By actively soliciting feedback from users, stakeholders, advocacy groups, and subject matter experts, Nexly continuously improves AI governance, identifies potential risks, and aligns its technologies with societal values and ethical norms.

Key Principles of Community Engagement

  • Accessible Feedback Channels: We provide multiple, user-friendly channels for individuals and organizations to submit input, raise concerns, or share experiences related to AI systems. These channels are designed to be accessible to all users, including those with disabilities or limited technical expertise.
    • Feedback portals within user accounts and applications
    • Dedicated email and support lines for AI-related inquiries
    • Surveys, focus groups, and community forums to gather structured insights
  • Stakeholder Collaboration: Nexly proactively engages with a wide range of stakeholders, including academic researchers, industry experts, civil society organizations, and policymakers, to inform AI system design, governance, and ethical standards. This collaboration ensures diverse perspectives are considered and integrated into decision-making.
  • Public Consultation and Transparency: We participate in public discussions, workshops, and consultations to share insights about AI deployments and gather community feedback on societal impact. Transparency about AI use, limitations, and outcomes strengthens trust and accountability.
  • Feedback Integration into AI Governance: Insights gathered from the community are systematically incorporated into AI governance processes, including:
    • Algorithmic audits and impact assessments
    • Bias detection and mitigation strategies
    • Ethical Review Board evaluations and recommendations
    This ensures that community input actively shapes AI policies, ethical guidelines, and system improvements.
  • Continuous Improvement: Community engagement is treated as an ongoing process rather than a one-time activity. Regular reviews of feedback inform updates to AI systems, governance practices, and user communication strategies, fostering an iterative approach to responsible AI development.

By embedding robust community engagement and feedback mechanisms, Nexly ensures that its AI systems are developed and deployed responsibly, with accountability to both users and society at large. This approach promotes inclusivity, transparency, and alignment with evolving ethical and regulatory standards, reinforcing trust in Nexly’s AI ecosystem.

54. Glossary of Terms

  • Data Controller: The legal entity that determines the purposes, means, and objectives of processing personal data. The Data Controller is responsible for ensuring compliance with data protection laws and safeguarding individuals’ privacy rights.
  • Data Processor: An organization or service provider that processes personal data on behalf of the Data Controller. Data Processors must follow the Controller’s instructions and implement appropriate technical and organizational measures to protect data.
  • Personal Data: Any information relating to an identified or identifiable natural person (“data subject”), including identifiers such as names, email addresses, location data, or online identifiers.
  • GDPR (General Data Protection Regulation): A European Union regulation governing the collection, processing, storage, and transfer of personal data. GDPR emphasizes transparency, accountability, data minimization, and protection of individuals’ rights.
  • EU AI Act: A European Union regulatory framework that sets requirements for the development, deployment, and use of AI systems, focusing on high-risk applications, transparency, accountability, and human oversight.
  • Artificial Intelligence (AI): Technology that simulates human cognitive functions such as learning, reasoning, problem-solving, perception, and language understanding through computational models.
  • AI System: A computer-based system that leverages AI techniques to perform tasks that typically require human intelligence, including decision-making, pattern recognition, prediction, and automation of processes.
  • Algorithm: A defined set of rules or instructions that a computer system follows to analyze data, solve problems, or complete tasks in a structured and reproducible manner.
  • Automated Decision-Making: Decisions made by AI or algorithmic systems without human intervention. Such decisions can have legal or significant effects on individuals and are subject to ethical and regulatory oversight.
  • Profiling: The automated processing of personal data to evaluate, predict, or analyze aspects of an individual’s behavior, preferences, or characteristics, often used to personalize services or identify risks.
  • Data Minimization: A core privacy principle that restricts the collection and retention of personal data to what is strictly necessary to achieve a specific purpose or objective.
  • Data Anonymization: The process of removing or modifying personal identifiers in a dataset so that individuals cannot reasonably be re-identified, enabling safe use of data for analytics or AI training.
  • Data Pseudonymization: The replacement of personal identifiers with unique codes or pseudonyms to reduce the risk of identification while allowing certain data analysis and processing activities.
  • Privacy by Design: An approach that embeds privacy and data protection measures into the design, development, and lifecycle of products, systems, and services, ensuring that user privacy is a foundational consideration.
  • Consent Management Dashboard: A user-facing interface that allows individuals to view, manage, and modify their consent preferences regarding the collection, processing, and sharing of personal data.
  • Cookie: A small text file stored on a user’s device by a website to retain information about the user, such as preferences, session identifiers, or tracking data. Cookies can be essential, functional, or used for analytics and advertising purposes.
  • High-Risk AI System: AI applications that may significantly impact the rights, safety, or freedoms of individuals, and which are subject to stricter regulatory, ethical, and monitoring requirements under the EU AI Act.
  • Fairness Metrics: Quantitative measures used to evaluate whether an AI system produces equitable outcomes across diverse groups, helping to detect and mitigate bias or discrimination.
  • Human-in-the-Loop: A design approach where humans actively monitor, review, or override AI-driven decisions to ensure accountability, ethical compliance, and mitigation of potential harm.

Nexly is committed to maintaining transparency and clarity in how we manage personal data. This glossary will evolve alongside updates to our Privacy Policy, emerging technologies, and regulatory requirements. Please revisit this page periodically to stay informed about our data handling practices and terminology.

55. Contact

For any questions, concerns, or requests related to this Privacy Policy, data protection, or the processing of your personal information, please contact Nexly’s Data Protection Officer (DPO) or the designated Data Controller at info@nexly.eu.

Our team is committed to responding promptly and transparently to all inquiries, including but not limited to:

  • Requests to access, rectify, or erase your personal data.
  • Questions regarding consent preferences, data portability, or restrictions on processing.
  • Concerns about data security, privacy practices, or potential breaches.
  • Clarifications on the use of AI systems, algorithmic decision-making, or automated profiling.

We prioritize your privacy and are dedicated to providing clear guidance and support in accordance with applicable laws, including the GDPR, EU AI Act, and other relevant regulations. Our team ensures that all requests are handled efficiently, securely, and in a manner that respects your rights and expectations.

For urgent matters or regulatory inquiries, our DPO is available to liaise directly with supervisory authorities to ensure compliance and resolution in accordance with legal obligations.