Toan walked into the office at 8:47 on a Tuesday morning holding a stack of printouts. Not an email forwarded to Slack. Not a link in a thread. Physical paper, printed from his home printer, with entire paragraphs highlighted in yellow and red. He dropped them on the table during standup and said, “We need to talk about compliance before we write another line of code.”
The top sheet was an article about a major video platform getting hit with a $170 million FTC fine for collecting personal information from children without parental consent. Below that was a breakdown of a popular kids’ game that got pulled from both app stores overnight for violating the Apple Kids Category rules. And under that, a summary of a social media company facing a $5 billion settlement, partially because of how it handled children’s data.
Toan had been up since 3 AM reading all of this. He’s our PM — the person who thinks about timelines, budgets, and stakeholder management. But that night, he became our compliance champion. And honestly, he probably saved KidSpark from a very expensive mistake.
I’m Thuan, the Tech Lead on KidSpark. In the previous five parts of this series, we’ve covered why we chose mobile, how we designed the product, how we approached UX for children, our tech stack decisions, and the core features we built. All of that work sits on top of what I’m about to describe in this post. If we had gotten compliance wrong, none of it would have mattered. The app would have been rejected from the stores, or worse — it would have been approved and then we’d be facing regulators.
This is the post Toan has been waiting to read. It’s the one he’d write himself if he had the patience to explain the technical implementation. Instead, he explained the rules, and I’m going to walk you through how we translated those rules into architecture decisions, code patterns, and operational processes.
Let’s get into it.
The Compliance Landscape That Keeps Toan Up at Night
Before we touch any code, you need to understand what’s at stake. This isn’t hypothetical risk. It’s not “maybe someday a regulator will care.” It’s happening right now, to companies much larger and better-funded than ours.
Real consequences, real money. The FTC has been steadily increasing enforcement of children’s privacy violations. In recent years, we’ve seen a major social media platform fined $5 billion — partly for misleading parents about how children’s data was used. A popular video platform paid $170 million for using persistent identifiers to track children and serve them targeted ads. A kids’ gaming company was fined $3 million for collecting children’s email addresses without parental consent. And those are just the headline cases. The FTC has pursued dozens of smaller companies, with civil penalties that can reach roughly $50,000 per violation — and each child affected counts as a separate violation.
The EU side is even more aggressive. GDPR penalties can reach 4% of annual global turnover or 20 million euros, whichever is higher. Several EU member states have already begun enforcement actions specifically targeting children’s apps. Ireland’s Data Protection Commission has been particularly active, issuing rulings that affect every app available in the EU market.
Beyond fines, there are consequences that don’t show up in headlines. App store removal can happen without warning. Both Apple and Google have pulled kids’ apps that violate their respective policies, sometimes giving developers as little as 14 days to fix issues before permanent removal. The reputational damage is arguably worse than the fine — once you’re known as “that kids’ app that got caught violating privacy rules,” parents will never trust you again.
Toan summarized it in a way that stuck with all of us: “We’re not building an app for adults who can read a privacy policy and make an informed choice. We’re building for children whose parents trust us to keep them safe. That trust is the entire product. Without it, we have nothing.”
He was right. And so we organized our compliance work around three pillars that every kids’ app developer needs to understand.
The Three Pillars:
- US Law (COPPA) — Children’s Online Privacy Protection Act, enforced by the FTC
- EU Law (GDPR-K) — General Data Protection Regulation, Article 8, covering children’s data
- App Store Policies — Apple’s Kids Category requirements and Google Play’s Families Policy
Each pillar has its own rules, its own enforcement mechanisms, and its own gotchas. They overlap in some places and diverge in others. And critically, compliance isn’t a feature you add in Sprint 12 — it’s a foundation you build on from day one. Every architectural decision we made for KidSpark was filtered through a compliance lens first.
COPPA Deep Dive
The Children’s Online Privacy Protection Act is the US federal law that governs how online services collect and use personal information from children under 13. It’s been around since 1998 and was significantly updated in 2013, with further proposed updates working through the regulatory process. If your app is available in the US and directed at children — or if you have actual knowledge that you’re collecting data from children — COPPA applies to you.
Toan brought in a summary from the FTC’s own guidance documents, and we went through it line by line. Here’s what we found.
What COPPA Actually Covers
The definition of “personal information” under COPPA is much broader than most developers expect. It’s not just name, email, and phone number. The FTC’s definition includes:
- First and last name
- Physical address
- Email address (including a parent’s email collected in the context of a child’s activity)
- Phone number
- Social Security number
- Photo, video, or audio file containing a child’s image or voice
- Geolocation precise enough to identify a street name and city
- Persistent identifiers — this is the one that catches people. Device IDs, cookies, IP addresses when used to track behavior across sites or services, advertising IDs — all of these count as personal information under COPPA when collected from children
- Any combination of information that permits the physical or online contacting of a specific individual
That last point is especially important. Even if you don’t collect a child’s name, if you collect their device ID, their usage patterns, and their approximate location, the FTC may consider that combination sufficient to identify a specific child. This is why “we don’t collect PII” is not a defense if you’re tracking persistent identifiers.
Linh and I went through our entire data model after reading this. We found three places where we were inadvertently planning to store device identifiers for children — crash reporting, push notification tokens, and analytics event logging. All three had to be redesigned before we wrote a single line of production code.
Key Requirements
COPPA imposes six core requirements on operators of child-directed services:
1. Verifiable Parental Consent (VPC). Before you collect any personal information from a child under 13, you must obtain verifiable consent from a parent or guardian. Not a checkbox. Not an “I am over 13” button. Actual, verifiable consent from an actual parent.
2. Direct notice to parents. You must provide clear, comprehensive notice to parents about your data practices — what you collect, why you collect it, how you use it, who you share it with, and how parents can exercise their rights.
3. Data minimization. You can only collect personal information that is reasonably necessary for the child’s participation in the activity. You can’t require a child to provide their email address, location, and phone number just to play a math game.
4. Data retention limits. You must retain children’s personal information only as long as reasonably necessary to fulfill the purpose for which it was collected. Then you must delete it using reasonable measures.
5. Parental access and control. Parents have the right to review the personal information you’ve collected from their child, have it deleted, and refuse to permit further collection or use.
6. Reasonable security. You must establish and maintain reasonable procedures to protect the confidentiality, security, and integrity of children’s personal information.
Each of these requirements maps to specific technical decisions in KidSpark. We didn’t treat them as legal abstractions — we treated them as feature requirements, complete with acceptance criteria.
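As one example of that translation, a retention rule (requirement 4) might look like a scheduled job along these lines. This is a sketch only — `RetentionPolicy`, `ChildDataRepository`, and `RetentionEnforcer` are illustrative names, not KidSpark’s actual classes:

```dart
// Illustrative sketch only: these types are hypothetical,
// not KidSpark's real implementation.
class RetentionPolicy {
  final String scope;    // e.g. 'learning_data'
  final Duration maxAge; // how long data in this scope may be kept
  const RetentionPolicy(this.scope, this.maxAge);
}

abstract class ChildDataRepository {
  Future<void> deleteOlderThan({
    required String scope,
    required DateTime cutoff,
  });
}

class RetentionEnforcer {
  final ChildDataRepository _repository;
  final List<RetentionPolicy> _policies;
  RetentionEnforcer(this._repository, this._policies);

  /// Run on a schedule (e.g. daily): anything past its scope's
  /// retention window is deleted, satisfying requirement 4.
  Future<void> enforce(DateTime now) async {
    for (final policy in _policies) {
      await _repository.deleteOlderThan(
        scope: policy.scope,
        cutoff: now.subtract(policy.maxAge),
      );
    }
  }
}
```

The useful property of this shape is that retention limits live in one declarative list rather than being scattered across feature code, which makes them easy to show an auditor.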
Verifiable Parental Consent Methods
This is where the rubber meets the road. The FTC provides a list of acceptable methods for obtaining verifiable parental consent. Toan made a comparison table that we referenced constantly during development:
| Method | How It Works | Pros | Cons |
|---|---|---|---|
| Email Plus | Parent confirms via email + follow-up action (e.g., delayed confirmation, text message) | Simple to implement, low friction | Lowest assurance level |
| Credit Card | Small charge verified against card records | Higher assurance, card holder is likely an adult | Adds friction, processing cost, requires payment infrastructure |
| Video Call | Live video verification with a trained operator | Highest assurance of identity | Doesn’t scale, expensive, scheduling friction |
| Government ID | Upload a photo ID that gets verified and deleted | High assurance of identity and age | Privacy concerns about storing ID images, complex to implement |
| Knowledge-Based | Questions only a parent would know (e.g., last four of SSN, address history) | Moderate assurance | Limited accuracy, potential for false negatives, data sourcing complexity |
There are also newer methods being explored, including face-scan age estimation and digital identity verification services. But as of our development timeline, the five methods above were the established FTC-approved approaches.
KidSpark’s Consent Approach
We chose a two-tier approach based on practicality:
Free tier: Email Plus. When a parent creates an account and adds a child profile, we send a consent request email to the parent’s verified email address. The email contains a unique verification link that opens a consent form describing exactly what data we collect from the child and how we use it. The parent must click “I Consent” on that form. We then send a follow-up confirmation email 48 hours later that gives the parent a chance to revoke consent. This follow-up is the “plus” in Email Plus — it provides an additional verification step.
Paid tier: Credit card verification occurs naturally. When a parent subscribes to KidSpark Premium, the credit card transaction itself serves as a higher-assurance form of verification. The cardholder is presumed to be an adult. We still send the consent notification email, but the payment transaction provides the primary verification.
Here’s how the consent service works in our codebase:
enum ConsentMethod { emailPlus, creditCard, governmentId }

enum ConsentScope {
  learningData, // Lesson progress, scores, learning path
  profileData, // Display name, age tier, avatar selection
  communicationData // Push notifications to parent device
}

class ConsentRecord {
  final String id;
  final String parentEmail;
  final String childIdentifier; // Internal ID, NOT child's email
  final Set<ConsentScope> scopes;
  final ConsentMethod method;
  final DateTime grantedAt;
  final DateTime? revokedAt;
  final String ipAddress; // For audit purposes
  final String consentVersion; // Version of privacy policy consented to

  ConsentRecord({
    required this.id,
    required this.parentEmail,
    required this.childIdentifier,
    required this.scopes,
    required this.method,
    required this.grantedAt,
    this.revokedAt,
    required this.ipAddress,
    required this.consentVersion,
  });

  bool get isActive => revokedAt == null;
}
class ConsentService {
  final ConsentRepository _consentRepository;
  final EmailService _emailService;
  final AuditLogger _auditLogger;
  final DataLifecycleService _dataLifecycleService; // used on revocation

  ConsentService(
    this._consentRepository,
    this._emailService,
    this._auditLogger,
    this._dataLifecycleService,
  );

  Future<ConsentResult> requestParentalConsent({
    required String parentEmail,
    required String childProfileId,
    required Set<ConsentScope> scopes,
  }) async {
    // 1. Generate unique consent token
    final token = _generateSecureToken();

    // 2. Create pending consent record
    final pendingConsent = PendingConsent(
      token: token,
      parentEmail: parentEmail,
      childProfileId: childProfileId,
      scopes: scopes,
      createdAt: DateTime.now(),
      expiresAt: DateTime.now().add(Duration(days: 7)),
    );
    await _consentRepository.storePending(pendingConsent);

    // 3. Send consent request email with verification link
    await _emailService.sendConsentRequest(
      to: parentEmail,
      consentUrl: 'https://kidspark.app/consent/verify?token=$token',
      scopes: scopes,
      privacyPolicyVersion: currentPrivacyPolicyVersion,
    );

    // 4. Log the consent request for audit trail
    await _auditLogger.log(
      event: 'consent_requested',
      parentEmail: parentEmail,
      childProfileId: childProfileId,
      scopes: scopes.map((s) => s.name).toList(),
    );

    return ConsentResult.pending(token);
  }

  Future<ConsentResult> verifyConsent({
    required String token,
    required String ipAddress,
  }) async {
    final pending = await _consentRepository.getPending(token);
    if (pending == null || pending.isExpired) {
      return ConsentResult.expired();
    }

    // Create the verified consent record
    final consent = ConsentRecord(
      id: _generateId(),
      parentEmail: pending.parentEmail,
      childIdentifier: pending.childProfileId,
      scopes: pending.scopes,
      method: ConsentMethod.emailPlus,
      grantedAt: DateTime.now(),
      ipAddress: ipAddress,
      consentVersion: currentPrivacyPolicyVersion,
    );
    await _consentRepository.store(consent);
    await _consentRepository.removePending(token);

    // Schedule follow-up confirmation email (Email Plus requirement)
    await _emailService.scheduleFollowUpConfirmation(
      to: pending.parentEmail,
      consentId: consent.id,
      sendAt: DateTime.now().add(Duration(hours: 48)),
    );

    await _auditLogger.log(
      event: 'consent_granted',
      consentId: consent.id,
      parentEmail: consent.parentEmail,
      childProfileId: consent.childIdentifier,
    );

    return ConsentResult.granted(consent);
  }

  Future<void> revokeConsent({
    required String consentId,
    required String parentEmail,
  }) async {
    final consent = await _consentRepository.getById(consentId);
    if (consent == null || consent.parentEmail != parentEmail) {
      throw UnauthorizedException('Cannot revoke this consent');
    }
    await _consentRepository.markRevoked(consentId, DateTime.now());

    // Trigger data handling: stop collecting, schedule cleanup
    await _dataLifecycleService.onConsentRevoked(
      childProfileId: consent.childIdentifier,
      revokedScopes: consent.scopes,
    );

    await _auditLogger.log(
      event: 'consent_revoked',
      consentId: consentId,
      parentEmail: parentEmail,
    );
  }
}
Every consent action — request, grant, revocation — is logged immutably. Toan insisted on this after reading about companies that couldn’t prove they had obtained valid consent when challenged by the FTC. The audit trail is your defense. Without it, you’re relying on “trust us, we asked” — and regulators do not find that persuasive.
One detail worth calling out: the consentVersion field. When we update our privacy policy, existing consents are tied to the version of the policy that was in effect when consent was granted. If we make a material change to data practices, we need to re-request consent. This is a requirement that many apps miss, and it’s a common source of enforcement actions.
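A minimal sketch of how that version check might gate data collection at app startup — the names here are illustrative, not our actual API:

```dart
// Illustrative sketch: PolicyRelease and requiresReconsent are
// hypothetical names, not KidSpark's actual implementation.
class PolicyRelease {
  final String version;
  final bool materialChange; // did data practices materially change?
  const PolicyRelease(this.version, {required this.materialChange});
}

/// True when any policy released since the stored consent made a
/// material change to data practices. In that case, collection
/// pauses until the parent consents to the new version.
bool requiresReconsent(List<PolicyRelease> releasesSinceConsent) {
  return releasesSinceConsent.any((r) => r.materialChange);
}
```

The key design point is the `materialChange` flag: a typo fix in the policy doesn’t force millions of re-consent emails, but any change to actual data practices does.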
GDPR-K: The European Standard
If COPPA is the US baseline, the EU’s General Data Protection Regulation takes everything further. Article 8 of GDPR specifically addresses “conditions applicable to child’s consent in relation to information society services,” and it introduces requirements that go beyond what COPPA demands.
Toan spent an entire weekend reading the Article 29 Working Party guidelines on children’s consent, and he came back with a document titled “Things That Are Legal Under COPPA But Illegal Under GDPR.” It was six pages long.
Key Differences from COPPA
Age threshold varies by member state. While COPPA draws a bright line at 13, GDPR Article 8 allows each EU member state to set its own threshold between 13 and 16. Germany, the Netherlands, and Ireland use 16. France uses 15. Spain and Italy use 14. The UK uses 13. If your app is available across the EU, you need to handle different thresholds — or default to the strictest standard.
We chose to default to 16 for all EU users. It’s simpler to implement, and it eliminates the complexity of geo-based age threshold detection. Toan’s logic was straightforward: “If we build for the strictest rule, we’re automatically compliant with all the less strict ones. The engineering cost of implementing variable thresholds is higher than the business cost of treating 14-year-olds the same as 10-year-olds.”
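To make the trade-off concrete, here’s a sketch of the rule we shipped next to the alternative we rejected (function and map names are illustrative):

```dart
// The rule we shipped: one threshold for every EU user. 16 is the
// strictest value any member state can choose under Article 8, so a
// mis-detected country can never make us under-protective.
int euConsentAge(String countryCode) => 16;

// The rejected alternative would have needed a per-country table,
// maintained against changing member-state law, plus reliable
// geo-detection to pick the right entry:
//
//   const consentAgeByCountry = <String, int>{ /* 13..16 per state */ };
//   int euConsentAge(String cc) => consentAgeByCountry[cc] ?? 16;
```

Note that even the rejected version has to fall back to 16 when the lookup fails, which is most of the argument for skipping the table entirely.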
Broader definition of personal data. GDPR defines personal data as “any information relating to an identified or identifiable natural person.” This is broader than COPPA’s “personal information” definition. Under GDPR, even pseudonymized data can be considered personal data if there’s a reasonable possibility of re-identification. Online identifiers — IP addresses, cookie identifiers, device fingerprints — are explicitly called out as personal data.
Right to erasure must be technically complete. Under COPPA, parents can request deletion of their child’s data and you must comply. Under GDPR, the “right to be forgotten” (Article 17) requires that deletion actually be carried out in full. You can’t just mark a record as deleted in your database — you need to actually remove the data from all systems, backups, and third-party processors. And you need to do it within one month of the request.
This had significant implications for our architecture. We had to design our data storage so that child data could be completely isolated and deleted without affecting aggregate analytics or other users’ data. I’ll cover the specifics in the data architecture section below.
Data portability. Parents can request a complete export of all data you hold about their child, in a commonly used, machine-readable format. We chose JSON for KidSpark’s data exports — it’s human-readable, widely supported, and easy to generate from our data models.
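A hedged sketch of what assembling that export might look like — the field and parameter names are hypothetical, not our actual schema:

```dart
import 'dart:convert';

// Illustrative sketch: field and parameter names are hypothetical.
// Each subsystem contributes whatever it holds about the profile.
Map<String, dynamic> buildChildDataExport({
  required String childProfileId,
  required Map<String, dynamic> profileData,
  required List<Map<String, dynamic>> learningRecords,
  required List<Map<String, dynamic>> consentHistory,
}) {
  return {
    'export_generated_at': DateTime.now().toUtc().toIso8601String(),
    'child_profile_id': childProfileId,
    'profile': profileData,
    'learning_records': learningRecords,
    'consent_history': consentHistory,
  };
}

// The parent-facing download is then just:
//   jsonEncode(buildChildDataExport(...))
```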
Legitimate interest is harder to claim. Under GDPR, you can sometimes process data without consent if you have a “legitimate interest.” But Recital 38 states that children “merit specific protection with regard to their personal data, as they may be less aware of the risks, consequences and safeguards concerned.” In practice, this means regulators are extremely skeptical of legitimate interest claims when children’s data is involved. For KidSpark, we rely entirely on parental consent — we never invoke legitimate interest for children’s data.
Data Protection Impact Assessment (DPIA) required. Article 35 requires a DPIA when processing is “likely to result in a high risk to the rights and freedoms of natural persons.” Processing children’s data on a large scale almost certainly triggers this requirement. A DPIA is a structured assessment of the necessity and proportionality of data processing, the risks to individuals, and the measures you’ve put in place to mitigate those risks.
Toan led our DPIA process with input from a privacy consultant we brought in for two weeks. The assessment document runs about 40 pages and covers every data flow in the application. It’s not a fun document to write. It is, however, an extremely useful exercise — it forced us to map every piece of data we collect, justify why we need it, and document how we protect it.
Penalties that get attention. GDPR violations can result in fines up to 4% of annual global turnover or 20 million euros, whichever is higher. For a startup like KidSpark, the 20 million euro figure is the relevant one — and it would be existentially destructive. Even the lower tier of GDPR fines (up to 2% of turnover or 10 million euros) would end most small companies.
KidSpark’s GDPR Approach
We adopted three principles that guided every GDPR-related decision:
1. Default to the strictest standard. If COPPA says one thing and GDPR says something stricter, we follow the stricter rule for all users. This is more conservative than necessary — you could theoretically apply different standards based on user location — but it dramatically simplifies implementation and eliminates the risk of misidentifying a user’s jurisdiction.
2. Privacy by design and by default. This is an explicit GDPR requirement (Article 25), and we took it literally. Every new feature starts with the question: “Does this require personal data? Can we achieve the same result without it?” The default state of KidSpark is maximum privacy — features that require additional data collection are opt-in, not opt-out.
3. Data minimization as a core principle. We collect the minimum data necessary for each feature to function. No “nice to have” data fields. No “we might need this later” collection. If we can’t articulate a specific, current use for a piece of data, we don’t collect it. Period.
Here’s a concrete example. Early in development, Hana wanted to show each child a personalized greeting: “Good morning, Sarah! Ready to learn today?” To do that, we’d need to store the child’s first name and potentially their time zone. Toan flagged this immediately. Under data minimization, we needed to justify collecting the child’s name for this purpose.
After discussion, we decided that the display name is set by the parent during profile creation and is covered by our consent scope for profile data. But we use a generic identifier internally — the child’s display name is never used as a database key, never included in analytics events, and never sent to third-party services. It exists only in the profile data that the parent controls and can delete at any time.
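In code terms, the separation looks roughly like this (a simplified sketch, not our actual model):

```dart
// Illustrative sketch: ChildProfile here is simplified.
class ChildProfile {
  final String internalId;  // opaque key used by storage and sync
  final String displayName; // parent-set, shown in UI only
  final String ageBand;     // '5-under', '6-8', '9-11'

  const ChildProfile({
    required this.internalId,
    required this.displayName,
    required this.ageBand,
  });
}

// The greeting reads the display name from the local profile...
String greeting(ChildProfile profile) =>
    'Good morning, ${profile.displayName}! Ready to learn today?';

// ...but analytics events, logs, and third-party calls never touch
// displayName; they only ever see internalId at most.
```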
Apple App Store Kids Category
If COPPA and GDPR are the legal floor, Apple’s Kids Category requirements are the ceiling — and in many ways, they’re stricter than either law. Apple doesn’t just care about legal compliance. They care about user experience, parental trust, and brand reputation. And they enforce their rules through the most effective mechanism possible: they won’t let your app into the store if you violate them.
Hana discovered this firsthand when she was researching reference apps for our UX work. She found several kids’ apps that had been unceremoniously removed from the App Store after Apple tightened its Kids Category policies. Developers had no warning. One day their app was live with thousands of downloads. The next day, gone.
Requirements That Apple Strictly Enforces
No third-party analytics SDKs. This is the rule that catches the most developers by surprise. If your app is in the Kids Category, you cannot include third-party analytics SDKs. Not Firebase Analytics. Not Mixpanel. Not Amplitude. Not Segment. Not even if you’ve disabled them for child users. Not even if you only use them for crash reporting. If the SDK binary is in your app bundle, Apple will reject it.
This forced us to rethink our entire analytics strategy. We couldn’t use any of the standard tools that mobile developers rely on. Instead, we built a lightweight, first-party analytics system that sends aggregate, non-identifying events to our own backend. No device identifiers. No user identifiers. No persistent tracking. Just anonymous event counts: “a lesson was completed,” “a quiz was started,” “the app was opened.” We can measure feature usage without identifying any individual child.
// First-party analytics - no PII, no persistent identifiers
import 'dart:io' show Platform;

class KidSafeAnalytics {
  final ApiClient _apiClient;

  KidSafeAnalytics(this._apiClient);

  Future<void> trackEvent({
    required String eventName,
    Map<String, dynamic>? properties,
  }) async {
    // Strip any potential PII before sending
    final safeProperties = _sanitizeProperties(properties);

    await _apiClient.post('/analytics/events', body: {
      'event': eventName,
      'timestamp': DateTime.now().toIso8601String(),
      'app_version': appVersion,
      'platform': Platform.isIOS ? 'ios' : 'android',
      'age_band': currentAgeBand, // '5-under', '6-8', '9-11'
      // NO device ID, NO user ID, NO session ID
      ...?safeProperties,
    });
  }

  Map<String, dynamic> _sanitizeProperties(Map<String, dynamic>? props) {
    if (props == null) return {};
    final sanitized = Map<String, dynamic>.from(props);
    // Remove any keys that could contain PII
    const forbiddenKeys = [
      'user_id', 'device_id', 'email', 'name',
      'ip', 'location', 'session_id', 'advertising_id',
    ];
    for (final key in forbiddenKeys) {
      sanitized.remove(key);
    }
    return sanitized;
  }
}
No third-party advertising. Kids Category apps cannot display third-party advertisements. Period. No banner ads, no interstitials, no rewarded video ads, no native ads. This eliminates an entire monetization model and is one of the reasons kids’ apps tend toward subscription or premium pricing.
No links out of the app without a parental gate. Every link that could take a child outside of the app — to a website, to the App Store, to Safari, to any external URL — must be gated behind a parental gate. This includes “Rate This App” prompts, social media sharing buttons, links to your privacy policy (though the privacy policy must still be accessible via the App Store listing), and even help documentation if it opens in a browser.
No in-app purchases accessible to children without a parental gate. If your app offers in-app purchases, the purchase flow must be gated. The child should not be able to initiate a purchase on their own.
Age band declaration. When you submit to the Kids Category, you must declare an age band: 5 and Under, 6-8, or 9-11. Apple’s review team will evaluate your app against the expectations for that age band. A 5-and-Under app with text-heavy interfaces will be rejected. A 9-11 app with mature themes will be rejected.
Human review is more rigorous. Apple’s review team pays extra attention to Kids Category submissions. Review times may be longer, and reviewers will actively test parental gates, check for third-party SDK calls, and verify that the app behaves as described in the metadata.
Parental Gate Implementation
The parental gate is one of the most important UI patterns in kids’ app development. It needs to be solvable by an adult but not by a child. Apple doesn’t dictate the exact mechanism, but they do reject gates that are too easy (e.g., “press and hold for 3 seconds” — kids can do that) or too obscure (e.g., requiring the parent to enter a password that they’ve forgotten).
The most common approach is a math problem that requires adult-level arithmetic. Here’s how we implemented it:
import 'dart:math';

import 'package:flutter/material.dart';

class ParentalGate extends StatefulWidget {
  final VoidCallback onVerified;
  final String actionDescription;

  const ParentalGate({
    required this.onVerified,
    required this.actionDescription,
    Key? key,
  }) : super(key: key);

  @override
  State<ParentalGate> createState() => _ParentalGateState();
}

class _ParentalGateState extends State<ParentalGate> {
  late int _a;
  late int _b;
  final _controller = TextEditingController();
  int _attempts = 0;
  String? _errorText;

  @override
  void initState() {
    super.initState();
    _generateProblem();
  }

  void _generateProblem() {
    final random = Random();
    // Numbers between 10 and 29 — multiplication requires
    // adult-level arithmetic that young children can't solve
    _a = random.nextInt(20) + 10;
    _b = random.nextInt(20) + 10;
    _controller.clear();
    _errorText = null;
  }

  void _checkAnswer() {
    final answer = int.tryParse(_controller.text.trim());
    if (answer == _a * _b) {
      Navigator.of(context).pop();
      widget.onVerified();
    } else {
      _attempts++;
      if (_attempts >= 3) {
        // After 3 failed attempts, generate a new problem
        setState(() {
          _generateProblem();
          _attempts = 0;
          _errorText = 'Too many attempts. Try a new problem.';
        });
      } else {
        setState(() {
          _errorText = 'That\'s not right. Please try again.';
          _controller.clear();
        });
      }
    }
  }

  @override
  Widget build(BuildContext context) {
    return AlertDialog(
      title: const Text('Parent Verification'),
      content: Column(
        mainAxisSize: MainAxisSize.min,
        crossAxisAlignment: CrossAxisAlignment.start,
        children: [
          Text(
            'To ${widget.actionDescription}, please solve this problem:',
            style: Theme.of(context).textTheme.bodyMedium,
          ),
          const SizedBox(height: 16),
          Text(
            'What is $_a x $_b?',
            style: Theme.of(context).textTheme.headlineSmall,
          ),
          const SizedBox(height: 12),
          TextField(
            controller: _controller,
            keyboardType: TextInputType.number,
            autofocus: true,
            decoration: InputDecoration(
              hintText: 'Enter the answer',
              errorText: _errorText,
              border: const OutlineInputBorder(),
            ),
            onSubmitted: (_) => _checkAnswer(),
          ),
        ],
      ),
      actions: [
        TextButton(
          onPressed: () => Navigator.of(context).pop(),
          child: const Text('Cancel'),
        ),
        ElevatedButton(
          onPressed: _checkAnswer,
          child: const Text('Verify'),
        ),
      ],
    );
  }

  @override
  void dispose() {
    _controller.dispose();
    super.dispose();
  }
}
We use this gate in front of every action that could lead outside the app or initiate a purchase. Hana designed it to feel non-intrusive for parents while being genuinely unsolvable for a 6-year-old. The multiplication of two-digit numbers is something most adults can do with a moment of thought, but it’s well beyond the math ability of children in our target age range (5-11).
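In practice, every external action goes through a small wrapper like this one (the helper name and `launchHelpSite` are hypothetical stand-ins):

```dart
// Hypothetical helper: shows the gate as a dialog and runs the
// action only after the parent solves the problem.
Future<void> withParentalGate(
  BuildContext context, {
  required String actionDescription,
  required VoidCallback onVerified,
}) {
  return showDialog<void>(
    context: context,
    builder: (_) => ParentalGate(
      actionDescription: actionDescription,
      onVerified: onVerified,
    ),
  );
}

// e.g. gating the help page (launchHelpSite is a stand-in):
// withParentalGate(context,
//     actionDescription: 'open the help website',
//     onVerified: launchHelpSite);
```

Routing everything through one wrapper also makes it easy to audit before each App Store submission: search for external-action call sites and confirm every one goes through the gate.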
Common Apple Rejection Reasons
During our submission process, we learned from other developers’ experiences. Here are the most common rejection reasons for Kids Category apps:
Including a third-party analytics SDK — even if it’s disabled. Apple’s automated scanning checks for SDK binaries in your app bundle. If Firebase Analytics is linked in your project, even if you never initialize it, the review will flag it. We had to completely remove several SDK dependencies from our project files, not just stop calling their APIs.
External links without a parental gate. This includes the “Rate This App” prompt that many apps trigger automatically. In the Kids Category, even the system rating dialog needs a parental gate in front of it. We also had to gate our “Help” button because it opened a web view.
Collecting device identifiers. Even accessing the IDFA (Identifier for Advertisers) or IDFV (Identifier for Vendors) in a Kids Category app can trigger rejection. We removed all code that accessed device identifiers.
Missing or inaccessible privacy policy. Your privacy policy must be linked from the App Store listing and must be accessible to parents. It needs to be written in plain language — not legal boilerplate that nobody reads.
In-app purchases without a parental gate. If a child can tap a button that leads to a purchase sheet, Apple will reject it. The gate must appear before the system purchase dialog.
Google Play Families Policy
Google Play has its own set of requirements for apps that target children, and while there’s significant overlap with Apple’s policies, there are important differences. Toan mapped out the distinctions in a side-by-side comparison that we referenced throughout development.
Requirements
Target audience declaration. When you publish on Google Play, you must declare whether your app’s target audience includes children. If it does, you’re subject to the Families Policy. There’s no way around this for a kids’ learning app — we are unambiguously targeting children.
Families Self-Certified Ads SDK. Even if you don’t show ads, you must complete the Families Policy declaration regarding advertising. If you do include ads (which we don’t), you can only use ad SDKs that have been self-certified for compliance with the Google Play Families Policy. Google maintains a public list of certified ad SDKs.
Teacher Approved program. Google offers a “Teacher Approved” badge for apps that pass an additional review by a panel of educators and child development specialists. This badge is optional, but it’s incredibly valuable — it appears prominently in search results and the Family section of the Play Store. We applied for it and received it after a three-week review process. The requirements include educational value assessment, age-appropriateness review, and a design evaluation for child engagement.
Content rating questionnaire (IARC). All apps on Google Play must complete the International Age Rating Coalition questionnaire, which generates age ratings for multiple markets simultaneously. For kids’ apps, you need to answer questions about violence, sexual content, language, controlled substances, and user-generated content. Our responses resulted in an “Everyone” rating across all markets.
Data Safety section. Google Play’s Data Safety section requires you to declare, in detail, every type of data your app collects, whether that data is shared with third parties, whether it’s encrypted in transit, and whether users can request deletion. This declaration is publicly visible on your Play Store listing, and Google reviews it against your app’s actual behavior. Discrepancies can result in rejection or removal.
No behavioral advertising. Apps that target children cannot use behavioral advertising — meaning you cannot track a child’s behavior to serve them personalized ads. Contextual advertising (serving ads based on the current content, not the user’s profile) is technically permitted, but we chose to avoid all advertising entirely.
Privacy policy. Your privacy policy must be linked from the Play Store listing, accessible within the app, and written in language that parents can understand. It must specifically address children’s data practices.
Key Differences from Apple
Google’s approach differs from Apple’s in several important ways, and understanding these differences saved us from making incorrect assumptions.
More flexible on third-party SDKs. Unlike Apple, Google doesn’t categorically ban all third-party analytics SDKs. However, any SDK you use must comply with the Families Policy, which means it can’t collect personal information from children or use persistent identifiers for tracking. In practice, most popular analytics SDKs don’t meet these requirements, so the practical result is similar to Apple’s ban. We use the same first-party analytics solution on both platforms.
Teacher Approved is unique to Google. Apple has no equivalent program. The Teacher Approved badge is one of the most effective trust signals available on Google Play, and it’s worth the additional review effort if your app has genuine educational value.
Data Safety section is more detailed than Apple’s privacy labels. Google requires more granular disclosure of data practices than Apple’s privacy nutrition labels. You need to declare not just what data you collect, but the specific purpose for each type of data, whether it’s required or optional, and whether it’s processed ephemerally or persisted.
Enforcement style is different. Apple tends to enforce at review time — they catch issues before your app goes live. Google tends to enforce after publication — they may approve an app that later gets flagged by automated scanning or user reports. This means a Google Play approval doesn’t necessarily mean you’re in the clear. You need to be genuinely compliant, not just compliant enough to pass initial review.
We built our compliance processes to satisfy both platforms simultaneously. The stricter requirement wins in every case.
Authentication Without Collecting PII from Children
One of the earliest architectural decisions we made — and one that Toan drove with particular intensity — was how to handle authentication for child users. The standard approach to user authentication involves email addresses, passwords, and sometimes phone numbers. Every one of those is personal information that would trigger COPPA consent requirements and GDPR obligations.
Our solution: children don’t authenticate at all. Not in the traditional sense.
Here’s the flow:
- Parent creates an account using standard email/password authentication or OAuth (Sign in with Apple, Google Sign-In). The parent is an adult. Standard authentication applies.
- Parent creates child profiles within their account. The parent provides a display name (which can be a nickname or pseudonym), selects an age tier (5-under, 6-8, 9-11), and chooses an avatar from our predefined set. This is covered by the parental consent we’ve already obtained.
- Child selects their profile from a visual grid of avatars. No typing required. No credentials needed. The child taps their avatar on the device that the parent has already authenticated.
- Child enters a PIN — a 4-digit code set by the parent. This is not a password. It’s a simple access control to prevent siblings from using each other’s profiles. The PIN is stored locally on the device as a hashed value and is never transmitted to our servers.
- No email, no password, no social login for children. The child never provides any credential that could be considered personal information.
// Child profile selection screen
import React, { useState } from 'react';
import { View, Text, TouchableOpacity } from 'react-native';
import { useNavigation } from '@react-navigation/native';
interface ChildProfile {
id: string;
displayName: string;
avatarId: string;
ageBand: '5-under' | '6-8' | '9-11';
}
const ChildProfileSelector: React.FC = () => {
const { profiles } = useChildProfiles();
const navigation = useNavigation();
const handleProfileSelect = (profileId: string) => {
navigation.navigate('PinEntry', { profileId });
};
return (
<View style={styles.container}>
<Text style={styles.greeting}>Who's learning today?</Text>
<View style={styles.profileGrid}>
{profiles.map((profile) => (
<TouchableOpacity
key={profile.id}
onPress={() => handleProfileSelect(profile.id)}
style={styles.profileCard}
accessibilityLabel={`${profile.displayName}'s profile`}
accessibilityRole="button"
>
<Avatar avatarId={profile.avatarId} size={80} />
<Text style={styles.profileName}>{profile.displayName}</Text>
</TouchableOpacity>
))}
</View>
</View>
);
};
// PIN entry screen - no PII transmitted
const PinEntryScreen: React.FC<{ profileId: string }> = ({ profileId }) => {
  const navigation = useNavigation();
  const [pin, setPin] = useState('');
const [error, setError] = useState<string | null>(null);
const handlePinComplete = async (enteredPin: string) => {
// PIN verification happens locally - hash comparison only
const storedHash = await SecureStorage.get(`pin_hash_${profileId}`);
const enteredHash = hashPin(enteredPin);
if (enteredHash === storedHash) {
// Set active child profile in local session
await SessionManager.setActiveChildProfile(profileId);
navigation.navigate('ChildHome');
} else {
setError('That PIN is not right. Ask your parent for help!');
setPin('');
}
};
return (
<View style={styles.container}>
<Text style={styles.prompt}>Enter your secret code!</Text>
<PinInput
length={4}
value={pin}
onChange={setPin}
onComplete={handlePinComplete}
style={styles.pinInput}
/>
{error && <Text style={styles.errorText}>{error}</Text>}
<TouchableOpacity
onPress={() => navigation.goBack()}
style={styles.backButton}
>
<Text style={styles.backText}>Go back</Text>
</TouchableOpacity>
</View>
);
};
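The `hashPin` helper the screen calls is left undefined above. Here's a sketch of what it might look like — a salted variant shown with explicit salt handling for clarity (in the app, the salt lookup can live inside the one-argument helper). This is an illustrative Node-style implementation; on device you'd reach for the platform's crypto APIs instead.

```typescript
import { createHash, randomBytes } from 'crypto';

// A per-profile random salt means two children with the same PIN
// produce different hashes, and the hash reveals nothing off-device.
function hashPin(pin: string, salt: string): string {
  return createHash('sha256').update(`${salt}:${pin}`).digest('hex');
}

// Stored alongside the hash in secure local storage when the parent sets the PIN.
interface StoredPin {
  salt: string;
  hash: string;
}

function setPin(pin: string): StoredPin {
  const salt = randomBytes(16).toString('hex');
  return { salt, hash: hashPin(pin, salt) };
}

function verifyPin(entered: string, stored: StoredPin): boolean {
  return hashPin(entered, stored.salt) === stored.hash;
}
```

Because both the salt and the hash stay in device-local secure storage, a compromised server has nothing to leak — there is simply no PIN material on our side.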
This architecture achieves several compliance goals simultaneously. Children never provide personal information during the authentication process. The parent controls all profile data and can modify or delete it at any time. The PIN is a local security measure that never leaves the device — it’s not personal information under either COPPA or GDPR because it’s not transmitted to or stored on our servers. And the entire flow is designed to be usable by young children: big avatar images, minimal text, clear visual feedback.
Linh implemented the avatar selection screen with extra care for accessibility. Each avatar has a descriptive label for screen readers, the touch targets are oversized for small fingers, and the PIN entry uses large, well-spaced digits. Hana tested the flow with children in our target age range and found that even 5-year-olds could reliably select their profile and enter a 4-digit PIN with minimal adult assistance.
One subtle point: the child’s “session” is entirely local. When a child selects their profile and enters their PIN, the app loads their learning data from the local cache. The app syncs data with our servers using the parent’s authentication token, never the child’s profile identifier directly. From our server’s perspective, all requests come from an authenticated parent account. The child profile ID is metadata attached to learning events, but it’s an opaque internal identifier — not a name, not an email, not anything that identifies a real person.
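The shape of that sync request can be sketched like this (field names are our own, for illustration):

```typescript
interface LearningEvent {
  childProfileId: string; // opaque internal ID like "cp_8f3a91" — not a name
  lessonId: string;
  score: number;
  completedAt: string; // ISO 8601
}

interface SyncRequest {
  events: LearningEvent[];
}

// The request is authorized with the PARENT's token; the child profile
// ID only ever appears as opaque metadata inside the body.
function buildSyncRequest(parentToken: string, events: LearningEvent[]) {
  const payload: SyncRequest = { events };
  return {
    headers: { Authorization: `Bearer ${parentToken}` },
    body: JSON.stringify(payload),
  };
}
```

From the server's point of view, every request is "parent account X uploaded some learning events" — the child never appears as a principal anywhere in the auth layer.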
Data Architecture for Compliance
All of the compliance requirements we’ve discussed have architectural implications. You can’t bolt compliance onto an existing data architecture. You need to design your data storage, data flows, and data lifecycle with compliance as a primary constraint from the beginning.
Toan and I spent a full week mapping out our data architecture before Linh wrote the first line of database code. Here’s what we arrived at.
Data Classification
Every piece of data in KidSpark falls into one of five categories, and each category has its own storage, retention, encryption, and deletion rules:
| Category | Examples | Storage | Retention | Encryption |
|---|---|---|---|---|
| Child Learning Data | Lesson scores, quiz answers, progress milestones, learning path state | Server + local cache | Active account + 90 days after deletion | AES-256 at rest, TLS 1.3 in transit |
| Child Profile Data | Display name, age tier, avatar selection, PIN hash (local only) | Server (except PIN) | Until parent deletes profile | AES-256 at rest, TLS 1.3 in transit |
| Parent Account Data | Email, password hash, payment tokens, subscription status | Server only | Account lifetime + legal retention period | AES-256 at rest, bcrypt password hashing, TLS 1.3 in transit |
| Analytics (Aggregate Only) | Lesson completion rates, feature usage counts, crash-free session rates | Server only | 2 years rolling | N/A — contains no PII |
| Consent Records | Consent proofs, timestamps, consent versions, audit logs | Server only | Minimum 5 years (legal requirement) | AES-256 at rest, immutable storage, TLS 1.3 in transit |
The key design principle is data isolation. Child learning data is stored in a way that it can be completely deleted without affecting aggregate analytics. We achieve this through a separation between raw data (which contains child profile IDs and is subject to deletion) and aggregate data (which contains only counts and averages and is never subject to deletion because it contains no personal information).
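A minimal sketch of that separation (type and function names are ours): raw events carry the child profile ID and are deletable; aggregates carry only counts and sums and are retained.

```typescript
interface RawLessonEvent {
  childProfileId: string;
  lessonId: string;
  score: number;
}

interface LessonAggregate {
  lessonId: string;
  completions: number;
  scoreSum: number; // average = scoreSum / completions
}

// Fold each raw event into the aggregate as it arrives, so deleting
// raw rows later never changes the aggregate numbers.
function foldIntoAggregate(
  agg: LessonAggregate | undefined,
  ev: RawLessonEvent
): LessonAggregate {
  return {
    lessonId: ev.lessonId,
    completions: (agg?.completions ?? 0) + 1,
    scoreSum: (agg?.scoreSum ?? 0) + ev.score,
  };
}

// Right-to-delete touches only the raw store; aggregates are untouched
// because they contain no personal information.
function deleteChildRawEvents(
  raw: RawLessonEvent[],
  childProfileId: string
): RawLessonEvent[] {
  return raw.filter((e) => e.childProfileId !== childProfileId);
}
```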
Right to Delete Implementation
The right to delete — required by both COPPA and GDPR — is one of the most technically challenging compliance requirements. It sounds simple: parent asks you to delete their child’s data, you delete it. In practice, data exists in multiple places: your primary database, your backup systems, your CDN cache, your log files, your third-party processors’ systems, and your analytics pipeline. Deleting it from all of these in a verifiable way requires careful architecture.
Here’s our deletion flow:
class DataDeletionService {
final ChildProfileRepository _profileRepo;
final LearningDataRepository _learningRepo;
final ConsentRepository _consentRepo;
final BackupService _backupService;
final AuditLogger _auditLogger;
  final NotificationService _notificationService;

  DataDeletionService(
    this._profileRepo,
    this._learningRepo,
    this._consentRepo,
    this._backupService,
    this._auditLogger,
    this._notificationService,
  );
Future<DeletionReceipt> deleteChildProfile({
required String parentId,
required String childProfileId,
required bool immediateDelete,
}) async {
// 1. Verify the parent owns this child profile
final profile = await _profileRepo.getById(childProfileId);
if (profile == null || profile.parentId != parentId) {
throw UnauthorizedException('Cannot delete this profile');
}
// 2. Create deletion request record
final deletionRequest = DeletionRequest(
id: generateId(),
parentId: parentId,
      childProfileId: childProfileId,
      parentEmail: profile.parentEmail, // needed later for the confirmation email
requestedAt: DateTime.now(),
scheduledDeletionDate: immediateDelete
? DateTime.now()
: DateTime.now().add(Duration(days: 30)), // Grace period
status: DeletionStatus.pending,
);
await _profileRepo.storeDeletionRequest(deletionRequest);
if (!immediateDelete) {
// 3a. Grace period: notify parent, allow cancellation
await _notificationService.sendDeletionScheduledNotice(
parentEmail: profile.parentEmail,
childDisplayName: profile.displayName,
deletionDate: deletionRequest.scheduledDeletionDate,
cancellationUrl:
'https://kidspark.app/deletion/cancel?id=${deletionRequest.id}',
);
await _auditLogger.log(
event: 'deletion_scheduled',
parentId: parentId,
childProfileId: childProfileId,
scheduledDate: deletionRequest.scheduledDeletionDate.toIso8601String(),
);
return DeletionReceipt(
requestId: deletionRequest.id,
status: DeletionStatus.scheduled,
scheduledDate: deletionRequest.scheduledDeletionDate,
);
}
// 3b. Immediate deletion
return await _executeDeletion(deletionRequest);
}
Future<DeletionReceipt> _executeDeletion(DeletionRequest request) async {
final steps = <DeletionStep>[];
// Step 1: Delete all learning data (scores, progress, quiz answers)
final learningResult = await _learningRepo.deleteAllForChild(
request.childProfileId,
);
steps.add(DeletionStep(
system: 'learning_data',
recordsDeleted: learningResult.count,
completedAt: DateTime.now(),
));
// Step 2: Delete child profile data
await _profileRepo.hardDelete(request.childProfileId);
steps.add(DeletionStep(
system: 'profile_data',
recordsDeleted: 1,
completedAt: DateTime.now(),
));
// Step 3: Mark backup data for exclusion
// (Backups can't be modified, but we flag the profile ID
// so it's excluded from any backup restoration)
await _backupService.addToExclusionList(request.childProfileId);
steps.add(DeletionStep(
system: 'backup_exclusion',
recordsDeleted: 0,
completedAt: DateTime.now(),
));
// Step 4: Update deletion request status
await _profileRepo.updateDeletionRequest(
request.id,
DeletionStatus.completed,
steps,
);
// Step 5: Consent records are NOT deleted
// (Legal requirement to retain proof of consent for 5+ years)
// But we anonymize the child identifier in consent records
await _consentRepo.anonymizeChildIdentifier(request.childProfileId);
// Step 6: Generate deletion receipt
final receipt = DeletionReceipt(
requestId: request.id,
status: DeletionStatus.completed,
completedAt: DateTime.now(),
steps: steps,
);
// Step 7: Send confirmation to parent
await _notificationService.sendDeletionConfirmation(
parentEmail: request.parentEmail,
receipt: receipt,
);
await _auditLogger.log(
event: 'deletion_completed',
parentId: request.parentId,
childProfileId: request.childProfileId,
steps: steps.map((s) => s.toJson()).toList(),
);
return receipt;
}
}
Several design decisions in this code deserve explanation.
The 30-day grace period. When a parent requests deletion through the app’s settings, we default to a 30-day grace period. This is because accidental deletions happen — a child playing with the parent’s phone might trigger a deletion, or a parent might change their mind. After 30 days, the deletion executes automatically. Parents can also choose immediate deletion, which skips the grace period. GDPR requires deletion within one month, so our 30-day grace period is within that window.
Backup exclusion rather than backup modification. It’s technically impractical to modify individual records in encrypted backups. Instead, we maintain an exclusion list: when a backup is restored (which should be a rare event), the restoration process checks the exclusion list and skips any records belonging to deleted profiles. This is an accepted approach under GDPR guidance.
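The restore-time check can be sketched as a simple filter (shown in TypeScript for illustration; names are ours):

```typescript
interface BackupRecord {
  childProfileId: string | null; // null for parent-level records
  payload: string;
}

// During restoration, skip any record belonging to a deleted profile.
// The exclusion list is the only state the deletion flow needs to
// maintain against immutable backups.
function filterRestoredRecords(
  records: BackupRecord[],
  exclusionList: Set<string>
): BackupRecord[] {
  return records.filter(
    (r) => r.childProfileId === null || !exclusionList.has(r.childProfileId)
  );
}
```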
Consent records are retained but anonymized. We have a legal obligation to prove that we obtained valid consent for data we processed. We can’t delete the consent records themselves. But we anonymize the child identifier in those records so that the consent proof exists without being linked to a specific child.
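The anonymization step just severs the link while leaving the proof intact — a sketch with our own field names:

```typescript
interface ConsentRecord {
  id: string;
  parentId: string;
  childProfileId: string | null; // null after anonymization
  policyVersion: string;
  grantedAt: string;
}

// The consent proof (who granted, when, under which policy version)
// survives; only the child identifier is removed, so the record can
// no longer be linked to a real profile.
function anonymizeChildIdentifier(
  records: ConsentRecord[],
  childProfileId: string
): ConsentRecord[] {
  return records.map((r) =>
    r.childProfileId === childProfileId ? { ...r, childProfileId: null } : r
  );
}
```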
The deletion receipt. We generate a formal receipt that the parent can keep for their records. This receipt documents exactly what was deleted, when, and from which systems. It’s a transparency measure that builds trust and provides evidence of compliance.
The Compliance Checklist
During our third week of development, Toan created a comprehensive checklist that he printed out and taped to the wall next to Linh’s monitor. He called it the “Toan Test” — every feature, every pull request, every design decision had to pass through it. Over time, we internalized the checklist to the point where we didn’t need to look at it anymore, but in the early weeks, it was invaluable.
Here’s the complete checklist, organized by development phase.
Before Development
- Determine target audience age ranges and declare age bands (5-under, 6-8, 9-11)
- Choose parental consent method(s) for each tier of service
- Design data-minimized architecture — document every data field and justify its necessity
- Write a child-friendly, parent-readable privacy policy in plain language
- Conduct Data Protection Impact Assessment (DPIA) for GDPR compliance
- Identify all third-party SDKs and verify Kids Category / Families Policy compatibility
- Establish data classification scheme (what’s PII, what’s aggregate, what’s exempt)
- Define data retention periods for each category of data
- Design authentication flow that avoids collecting PII from children
- Plan data export format for data portability requests
During Development
- No third-party analytics SDKs in the app bundle (Apple Kids Category requirement)
- Parental gates on all external links — every URL that opens outside the app
- Parental gates on all purchase flows — before the system payment dialog
- Parental gate on app rating prompts — before the system review dialog
- No behavioral tracking of children — no persistent identifiers, no cross-session tracking
- Encrypt all child data at rest (AES-256) and in transit (TLS 1.3)
- Implement right-to-delete with complete data removal across all systems
- Implement consent verification flow with audit trail
- No persistent identifiers shared with third parties
- No device identifiers accessed or stored (IDFA, IDFV, Android Advertising ID)
- No geolocation collection from child users
- Data export functionality for parent requests (JSON format)
- Age-appropriate error messages — no technical jargon shown to children
- Local-only PIN storage — never transmit child PINs to server
Before App Store Submission
- Privacy policy accessible from app settings AND store listing
- Data Safety section (Google Play) completed with accurate disclosures
- Privacy Nutrition Labels (Apple) completed with accurate disclosures
- Age rating questionnaire (IARC) completed for all target markets
- Google Play Families Policy declaration submitted
- Apple Kids Category metadata and age band selected
- All parental gates tested — verify they appear before every gated action
- Verify no third-party SDK binaries in the app bundle (use dependency audit tools)
- Verify no network requests to third-party analytics or advertising endpoints
- Test complete deletion flow — request, grace period, execution, receipt
- Test consent flow — request, grant, revocation, re-request
- Test data export — verify completeness and format
- Review all push notification content — no PII in notification payloads
- Verify that child sessions don’t create server-side logs containing PII
Post-Launch (Ongoing)
- Monitor for changes in COPPA rules, GDPR guidance, and store policies
- Respond to parental data access requests within 30 days
- Respond to deletion requests within 30 days (GDPR requirement)
- Re-request consent when privacy policy changes materially
- Conduct annual privacy review and update DPIA
- Train new team members on compliance requirements before they touch code
- Audit third-party dependencies regularly for compliance changes
Toan reviewed this checklist before every major milestone. When we added push notifications in Sprint 8, he flagged that notification payloads could contain child data — a lesson title like “Sarah’s Math Progress” would include the child’s name in a payload that passes through Apple’s and Google’s push notification servers. We changed all child-facing notifications to use generic language: “Time for a new lesson!” instead of “Sarah, your math lesson is ready!”
That’s the kind of detail that a compliance-focused PM catches and an engineering-focused team might miss.
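A payload guard along these lines can enforce that rule mechanically. This is a hypothetical sketch, not our production code; the fallback copy is the generic wording described above.

```typescript
// Reject any child-facing notification whose text contains a child's
// display name before it ever reaches APNs or FCM.
function sanitizeNotification(
  text: string,
  childDisplayNames: string[]
): { ok: boolean; text: string } {
  const lower = text.toLowerCase();
  const leaked = childDisplayNames.some((n) => lower.includes(n.toLowerCase()));
  // Fall back to generic copy rather than leaking a name in the payload.
  return leaked ? { ok: false, text: 'Time for a new lesson!' } : { ok: true, text };
}
```

Running every outbound notification through a check like this turns a "remember not to do that" convention into something a unit test can enforce.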
Best Practices for Passing App Store Review (Kids Apps)
After two rejections, three revision cycles, and dozens of conversations with other kids app developers, we compiled a set of best practices specifically for getting kids apps through both Apple and Google’s review processes. These go beyond the technical checklist above — they’re the operational and strategic practices that separate a smooth submission from weeks of back-and-forth rejections.
Pre-Submission Audit
Run a full SDK audit before every submission. Use flutter pub deps (or the equivalent for your framework) to generate a complete dependency tree, including transitive dependencies. For each dependency, verify:
- Does it access device identifiers (IDFA, IDFV, Android Advertising ID)?
- Does it make network requests to third-party servers?
- Is it on Google’s certified SDK list for Families apps?
- Does it collect any data that Apple would classify as tracking?
We built a simple script that parses our dependency tree and flags any package that isn’t on our approved list. Running this before every submission has prevented two potential rejections.
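The heart of such a script is just an allowlist check. Here's a sketch (the dependency-tree parsing from `flutter pub deps` output is omitted; the function and parameter names are ours):

```typescript
// depTree maps each package to its direct dependencies; the check walks
// everything it sees — including transitive deps — and flags anything
// not on the approved list.
function flagUnapproved(
  depTree: Record<string, string[]>,
  approved: Set<string>
): string[] {
  const seen = new Set<string>();
  for (const [pkg, deps] of Object.entries(depTree)) {
    seen.add(pkg);
    deps.forEach((d) => seen.add(d));
  }
  return [...seen].filter((p) => !approved.has(p)).sort();
}
```

Wire this into CI so a new transitive dependency fails the build, rather than failing app review three weeks later.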
Network traffic analysis is non-negotiable. Before submitting, run your app through a network proxy (Charles Proxy on macOS, mitmproxy on any platform) and record every outbound request for a complete user session — onboarding, lesson, quiz, parent dashboard, settings. Map each request to your Data Safety / Privacy Nutrition Label declarations. Apple and Google both audit actual network behavior against your declared data practices. Any discrepancy is a rejection.
Test your parental gates exhaustively. Apple’s reviewers will attempt to bypass every parental gate in your app. Test these scenarios:
- Can a child reach an external link without solving the gate?
- Can a child initiate a purchase without the gate appearing?
- Does the gate appear before the system rating dialog?
- Can the gate be bypassed by rotating the device or triggering a notification?
- Does the gate regenerate a new challenge after failed attempts?
Submission Strategy
Stagger your submissions. Don’t submit to Apple and Google simultaneously on your first release. Submit to one store first, resolve any issues, then submit to the other. We submitted to Google Play first because their review is faster, which gave us a quicker feedback loop. Once we were approved on Google, we submitted to Apple with higher confidence. For subsequent updates, simultaneous submission is fine because you’ve already resolved the common issues.
Apple-specific: submit Tuesday or Wednesday. Apple’s review team works weekdays. Friday submissions often sit until Monday. Submitting early in the week maximizes your chance of a same-week review and gives you business days to respond to any rejection.
Write reviewer notes like a cover letter. The “Notes for Review” field is your opportunity to preempt questions. For kids apps, always include:
- A demo parent account with pre-populated child profiles
- Step-by-step instructions for finding parental gates, settings, and purchase flows
- An explicit statement about your data practices: “This app does not use third-party analytics. All data collection is first-party and described in our privacy policy.”
- Screenshots of every parental gate with annotations
- A link to your privacy policy
Prepare for at least one rejection. Apple rejects approximately 40% of first-time Kids Category submissions. Google’s rejection rate for Families Policy apps is lower but still significant. Build rejection response time into your launch timeline — add two weeks of buffer for review iterations.
Post-Approval Maintenance
Monitor policy changes proactively. Both Apple and Google update their kids app policies regularly, sometimes with minimal notice. Subscribe to:
- Apple Developer News for App Store Review Guideline changes
- Google Play Policy Center updates for Families Policy changes
- FTC press releases for COPPA enforcement actions and rule changes
- EU regulatory updates for GDPR-K and Digital Services Act developments
Re-audit after every dependency update. When you update a dependency — even a minor version bump — re-run your SDK audit and network traffic analysis. A new version of a logging library might start collecting device identifiers that the previous version didn’t. A minor update to a UI framework might bundle an analytics module that wasn’t there before. We caught this exact scenario with a popular image caching library that added telemetry in a patch release.
Keep your privacy policy synchronized. Your privacy policy, your Apple Privacy Nutrition Labels, and your Google Data Safety section must all describe the same data practices. When you change any data collection behavior, update all three simultaneously. Discrepancies between these documents are a common trigger for post-approval enforcement actions.
Lessons Learned: What We Got Wrong
I’d be painting an unrealistic picture if I said we got everything right on the first try. We didn’t. Here are the mistakes we made and how we fixed them.
Mistake 1: We initially included Firebase Crashlytics. It’s such a standard tool for mobile developers that Linh added it to the project in the first week without thinking about it. We didn’t call any Firebase Analytics APIs, so we assumed we were fine. We weren’t. Apple’s automated scanning detected the Firebase SDK binary during our first Kids Category submission attempt, and we were rejected. Removing Firebase required replacing our crash reporting with a custom solution that captures stack traces and sends them to our own backend — without any device identifiers attached.
Mistake 2: We forgot about deep links. Our marketing team wanted to use deep links for re-engagement campaigns. A deep link from an email that opens the app and navigates directly to a specific lesson sounds harmless. But if the deep link includes a child profile ID in the URL, that’s a persistent identifier being transmitted outside the app. We had to redesign deep links to route through the parent’s authenticated session — the link opens the app, the parent authenticates, and then the app navigates to the content.
Mistake 3: Our initial data model wasn’t deletion-friendly. We originally stored learning data with foreign keys directly to child profiles. When we tried to implement the right-to-delete, we discovered that deleting a child profile would cascade into deleting aggregate statistics that we needed to retain. We had to refactor the data model to separate identifiable records (which get deleted) from anonymous aggregates (which get retained). This refactoring took almost two weeks and would have been much easier if we’d designed for it from the start.
Mistake 4: We didn’t account for screenshots in support tickets. When parents contacted support with issues, our support tool encouraged them to attach screenshots. Some of those screenshots showed their child’s profile screen, which included the child’s display name and avatar. Our support system stored those screenshots on a third-party platform that wasn’t covered by our data processing agreements. We had to switch to a support platform with appropriate data processing terms and add instructions asking parents not to include screenshots of child profiles.
Every one of these mistakes was caught before it caused a compliance incident, but some of them were caught uncomfortably late. The lesson is clear: compliance review needs to happen at the design stage, not the testing stage.
Privacy Policy: Writing for Parents, Not Lawyers
One requirement that spans all three pillars — COPPA, GDPR, and app store policies — is the privacy policy. Every kids’ app needs one, and it needs to be written in language that parents can actually understand.
Toan worked with our privacy consultant to draft a privacy policy that avoids legal jargon. Here are the principles we followed:
Use plain language. Instead of “We may process personal data for the purposes of our legitimate interests,” write “We use your child’s lesson progress to show you how they’re doing and to adjust the difficulty of future lessons.”
Be specific about children’s data. Don’t lump children’s data practices in with general data practices. Have a dedicated “Children’s Privacy” section that clearly states what data you collect from children, why, how long you keep it, and how parents can control it.
Make it findable. The privacy policy must be linked from your app store listing, accessible from within the app (behind a parental gate if necessary), and available on your website. We link to it from three places: the App Store / Play Store listing, the parent settings screen, and our website footer.
Version it. When you update the privacy policy, keep the previous version accessible. Parents who consented under version 1.0 should be able to see what they consented to. We maintain a versioned archive and tie each consent record to a specific policy version.
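The re-consent check can be sketched like this (names are our own; the sketch assumes the archive is in chronological order and contains at least one material version):

```typescript
interface PolicyVersion {
  version: string;
  publishedAt: string;
  material: boolean; // material changes require fresh consent
}

// The latest MATERIAL version is the one current consent must match;
// non-material wording tweaks don't invalidate existing consent.
function latestMaterialVersion(archive: PolicyVersion[]): string {
  const material = archive.filter((p) => p.material);
  return material[material.length - 1].version;
}

// Consent recorded against an older material version must be re-requested.
function needsReconsent(consentedVersion: string, archive: PolicyVersion[]): boolean {
  return consentedVersion !== latestMaterialVersion(archive);
}
```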
Include contact information. Both COPPA and GDPR require that you provide a way for parents to contact you about their child’s data. We include a dedicated email address (privacy@kidspark.app) and respond to all inquiries within 72 hours, well within the 30-day GDPR requirement.
The Bottom Line
I started this post by describing Toan walking into the office with a stack of printouts about compliance fines. By the time you’ve read this far, I hope you understand why he was right to sound the alarm.
Compliance isn’t a checkbox. It’s not something you handle in the last sprint before launch. It’s not a legal team’s problem that engineers don’t need to think about. It’s a foundational architectural constraint that affects every data model, every API endpoint, every SDK choice, and every UI flow in your application.
Compliance isn’t the enemy of good product design. When Toan pushed us to minimize data collection, Hana found that simpler profiles with avatars were actually more engaging for children than complex profiles with photos. When we removed third-party analytics, we built a first-party system that gives us exactly the metrics we need without the noise of a general-purpose analytics platform. When we designed authentication without child PII, we created a login flow that’s genuinely easier for children to use.
Every kids’ app team needs a Toan. Someone who reads the regulations, who prints out the FTC enforcement actions, who asks “but have you thought about…” before every feature ships. Compliance paranoia isn’t paranoia when the fines start at $50,000 per violation and app store removal can happen overnight. It’s just good engineering.
The children and families who use KidSpark trust us with something precious — their kids’ attention, their learning data, their privacy. Building that trust requires more than a good privacy policy. It requires an architecture designed from the ground up to protect the people who can’t protect themselves.
In Part 7, we’ll cover our testing strategy — unit tests, widget tests, integration tests, accessibility testing, and how we test on real devices with real children (with appropriate parental consent, of course).
This is Part 6 of a 10-part series: Building KidSpark — From Idea to App Store.
Series outline:
- Why Mobile, Why Now — Market opportunity, team intro, and unique challenges of kids apps (Part 1)
- Product Design & Features — Feature prioritization, user journeys, and MVP scope (Part 2)
- UX for Children — Age-appropriate design, accessibility, and testing with kids (Part 3)
- Tech Stack Selection — Flutter vs React Native vs Native, architecture decisions (Part 4)
- Core Features — Lessons, quizzes, gamification, offline mode, parental controls (Part 5)
- Child Safety & Compliance — COPPA, GDPR-K, and app store rules for kids (this post)
- Testing Strategy — Unit, widget, integration, accessibility, and device testing (Part 7)
- CI/CD & App Store — Build pipelines, code signing, submission, and ASO (Part 8)
- Production — Analytics, crash reporting, monitoring, and iteration (Part 9)
- Monetization & Growth — Ethical monetization, growth strategies, and lessons learned (Part 10)