Is it risky to use Firestore for a social commerce website?

Hello everyone, I’m developing a web version of my mobile app, which is a social commerce platform. I’m currently using Firestore as my database, but I’m worried that publicly displaying data will attract SEO crawlers, scrapers, and bots. That traffic could drive a significant increase in read operations and Cloud Function invocations, leading to unexpected Firebase charges. Does anyone have advice on how to address these concerns, or alternative strategies to prevent runaway billing?

Hey there, Max_31Surf! I ran into a similar scenario in one of my projects, and even though Firestore is pretty robust, the real trick is in how you manage data access. For instance, I had to be really mindful of unintentional read spikes from automated scrapers. I ended up experimenting with caching layers and tighter security rules so my endpoints wouldn’t get hammered by bots. It’s like building a little moat around your data! I’m curious, though: how are you currently handling authentication, and how are you telling real client devices apart from bots? Have you played around with rate limiting or something similar to throttle the unexpected reads? Sometimes a little extra logic in your Cloud Functions can filter out those unwanted requests (rough sketch below). What kind of monitoring or alerts are you using to catch unusual patterns early? Digging into those analytics can really point you in the right direction, and you might uncover some unexpected behavioral trends. Any particular strategies you’ve already tried or are considering? Would love to hear more about your approach!
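To make the Cloud Functions idea concrete, here’s a minimal sketch of the kind of throttling I mean, assuming an HTTPS function written in TypeScript with Express and the express-rate-limit package. The `products` collection name and the per-IP limits are placeholders, not something from your setup:

```typescript
// A minimal sketch: throttle each client IP before any Firestore read happens.
// Collection name and limits are hypothetical.
import * as functions from "firebase-functions";
import * as admin from "firebase-admin";
import express from "express";
import rateLimit from "express-rate-limit";

admin.initializeApp();
const app = express();

// Cloud Functions sit behind Google's proxy, so trust the first hop
// to get the real client IP for rate limiting.
app.set("trust proxy", 1);

// Drop anything beyond 60 requests per IP per minute before it reaches Firestore.
app.use(
  rateLimit({
    windowMs: 60 * 1000,
    max: 60,
    standardHeaders: true,
  })
);

// Serve the public listing through the function instead of direct client reads.
app.get("/products", async (_req, res) => {
  const snapshot = await admin
    .firestore()
    .collection("products")
    .limit(20)
    .get();
  res.json(snapshot.docs.map((doc) => ({ id: doc.id, ...doc.data() })));
});

export const api = functions.https.onRequest(app);
```

The point is just that the throttle runs before the database is touched, so a scraper hammering the endpoint burns function invocations at worst, not reads.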

Firestore has proven reliable in projects similar to social commerce platforms, provided you optimize data access. In my experience, a thorough review of your security rules deters many automated reads. A focused caching strategy significantly reduced unintentional Firebase charges in a comparable project of mine. Rate limiting through Cloud Functions won’t completely prevent bot access, but it does mitigate excessive reads from scrapers. Regular monitoring paired with detailed analytics has been critical for spotting unusual patterns early and avoiding billing surprises.
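One caching pattern I’ve seen used for this (a sketch under my own assumptions, not necessarily what the poster above did) is serving public reads through a function routed behind Firebase Hosting and letting the CDN absorb repeat hits. The route, collection name, and cache lifetimes below are illustrative:

```typescript
// Rough sketch: let the Hosting CDN cache a public feed so repeated bot hits
// are served from the edge instead of generating Firestore reads.
import * as functions from "firebase-functions";
import * as admin from "firebase-admin";

admin.initializeApp();

export const publicFeed = functions.https.onRequest(async (_req, res) => {
  // Browsers cache for 1 minute, the CDN for 10 minutes (s-maxage).
  res.set("Cache-Control", "public, max-age=60, s-maxage=600");

  const snapshot = await admin
    .firestore()
    .collection("posts")
    .orderBy("createdAt", "desc")
    .limit(20)
    .get();

  res.json(snapshot.docs.map((doc) => ({ id: doc.id, ...doc.data() })));
});
```

With something like this, a burst of identical requests costs roughly one Firestore query per cache window instead of one per hit.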

Hey Max, I really get where you’re coming from. I’ve been tinkering with similar challenges, where Firestore’s ease of use can hide underlying issues with uncontrolled reads. I’ve been toying with a middleware layer that acts as a gatekeeper, filtering out superfluous requests before they ever hit your DB, which could go a long way toward reducing those unexpected spikes. Have you thought about combining client-side caching with a lightweight API proxy to keep the bots at bay? I’ve seen folks experiment with that approach, and it opened up some interesting discussions about server-side rate control as well. How do you feel about a separate layer for public content, or even tools that apply rate limiting at the network edge? Something like the sketch below is roughly what I have in mind. It would be great to hear how these strategies might mesh with your current setup. Cheers and happy coding!
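For the gatekeeper idea, here’s one way it could look; this is purely illustrative, and wiring it to Firebase App Check is my assumption about how you might distinguish real app clients from bots, not something from the thread. Route and collection names are made up:

```typescript
// Hypothetical gatekeeper middleware: requests must carry a valid Firebase
// App Check token (sent by the web/mobile clients) or they are rejected
// before any Firestore read happens.
import * as functions from "firebase-functions";
import * as admin from "firebase-admin";
import express, { Request, Response, NextFunction } from "express";

admin.initializeApp();
const app = express();

async function requireAppCheck(req: Request, res: Response, next: NextFunction) {
  const token = req.header("X-Firebase-AppCheck");
  if (!token) {
    res.status(401).send("Missing App Check token");
    return;
  }
  try {
    // Throws if the token is invalid or expired.
    await admin.appCheck().verifyToken(token);
    next();
  } catch {
    res.status(401).send("Invalid App Check token");
  }
}

app.get("/feed", requireAppCheck, async (_req, res) => {
  const snapshot = await admin.firestore().collection("posts").limit(20).get();
  res.json(snapshot.docs.map((doc) => ({ id: doc.id, ...doc.data() })));
});

export const gatekeeper = functions.https.onRequest(app);
```

You could swap the App Check step for user-agent filtering or an edge rule at a CDN; the shape stays the same, which is the part I find appealing about the proxy approach.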

Hey Max, I used Firestore too and found that even fairly loose security rules plus caching work well. Bots might still sneak in, but basic filtering cuts out most of them. Keep a monitoring eye out to catch anomalies early; it helped me avoid surprise charges in my project.