Atlas


Joyce, a Decentralized Approach to Foster Business Agility

Despite all of the tools and methodologies that have arisen in the last few years, many companies, particularly those that have been in the market for decades, struggle to leverage their operational data to build new digital products and services. According to research and surveys conducted by McKinsey over the last few years, the success rate of digital transformations is consistently low, with less than 30% succeeding at improving their company’s performance. There are many reasons for this, but most of them can be summarized in a sentence: a digital transformation is primarily an organizational and cultural change, and only then a technological shift. The question is not whether digital transformation is a good thing, nor whether moving to the cloud is a good choice. Companies need (badly, in some cases) a digital transformation, and yes, the pros of moving to the cloud usually outweigh the cons. So, let’s dig deeper and analyze three of the main problems companies face on this journey.

Digital product development

Products are by nature customer-driven, but companies run their businesses on multiple back-end systems that are instead purpose-driven. Unless you run a very small business, different people with different objectives own those products and systems. Given this context, what happens when a company wants to launch a new digital product at speed? The back-end systems (CRMs, e-commerce, ERP, etc.) hold the data the product needs to bring to the customer. Some systems are SaaS, some are legacy, and others may be custom applications built by the company back when it disrupted the market with innovative solutions: the perfect recipe for integration hell. The product manager needs to coordinate and negotiate multiple change requests with the systems’ owners while trying to convince them to fit those requests into their backlogs in time to meet the deadline. Things get even worse when the new product relies on the computational power of the source systems: if those systems cannot handle the additional traffic, both the product and the core services will be affected.

Third-party integration

“Everybody wants the change, (almost) nobody wants to change.” In this ever-growing digital world, partnering with third parties (whether they are clients or service providers) is crucial, but everyone who has tried knows how challenging it is: non-standard interfaces, CSV files over FTP with fancy update rules, security issues… The list of unwanted things can grow indefinitely.

SaaS everywhere

The Software-as-a-Service model is extremely popular, and getting the service you want without worrying about the underlying infrastructure gives freedom and speed of adoption. But what happens when a big company relies on multiple SaaS products to run its business? Sooner or later, it experiences loss of control and higher costs in keeping a consistent view of the big picture. It has to deal with each SaaS product’s internal representation of its own data, multiple views of the same domain concept, and unplanned expenses to export, interpret, and integrate data from different sources in different formats.

Putting it all together

All the issues above fall into a well-known category of information technology: they are integration problems, and over the years a lot of vendors have promised a definitive solution. Now you can consider low-code/no-code platforms with hundreds of ready-made connectors and modern graphical interfaces. Problem solved, right?
Well, not really. Low-code integration platforms simplify implementation, and they are really good at it, but in doing so they oversimplify the real challenge: creating and maintaining a consistent set of APIs shaped around business value over time, and preventing the interfaces from leaking internal complexities to the rest of the company. That has to be defined and maintained through architectural choices and the proper skills, which are completely hidden behind the selling points of such platforms.

There are two different ways to solve integration problems:

Centralized, using adapters. The logic is pushed to a central orchestration component, with integration managed through a set of adapters. This is the rather old-school SOA approach, the one the majority of market integration platforms are built on.

Decentralized, pushing the logic to the edges and giving autonomous teams the freedom to define both the boundaries and the APIs that a domain must expose to deliver business value. This is a more modern approach that has arisen alongside the rise of microservices and, in the analytical world, the concept of data mesh.

The former gives speed at the starting point and the illusion of reducing the number of choices and skills needed to manage the problem, but in the long run it inevitably accumulates technical debt. Due to the lack of the necessary degrees of freedom, you lose the ability to evolve the integration points over time, the same thing that caused the transition from SOA to microservices architectures. The latter needs the relevant skills, vision, and ability to execute, but it gives immediate results and lets you flexibly manage the evolution of the enterprise architecture over time.

Old problems, new solutions

At Sourcesense, over the last 20 years we have partnered on hundreds of projects to bring agility, speed, and new open-source technology to our customers. Many times through the years we faced the integration challenges above, and yes, we tried to solve them with the technology available at the time: we built integration solutions on SOA (when it was the best of breed) and interacted with many of the integration platforms on the market. We struggled with the issues and limitations of the integration landscape, and we listened to our customers’ needs and to where expectations had fallen short. The rise of agile methodologies, cloud computing, and new techniques, technologies, and architectural styles has given an unprecedented boost to software evolution and to the ability to support business needs, so we embraced the new wave and now have growing experience in solving problems with these tools. Along the way, we noticed a recurring pattern when we encountered integration problems: the effectiveness of data hubs as components of enterprise architectures for solving these challenges. So we built one of our own: Joyce.

Data hubs

“Data hub” is a relatively new term that refers to software platforms that collect data from different sources with the main purpose of distribution and sharing. Since this definition is broad and vague, let’s add some other key elements that matter and help define the contours of our implementation. Collecting data from different sources can bring three major benefits:

Computational decoupling from the sources.
Pulling (or pushing) the data out of the originating systems means that client applications and services interact with the hub rather than directly with the sources, preventing the sources from being slowed down by additional traffic.

Catalog and discoverability. If data is collected correctly, this leads to the creation of a catalog, allowing people inside the organization to search, discover, and use the data inside the hub.

Security. The main purpose of a hub is distribution and sharing, which immediately puts the focus on access control and security hardening. A single access point simplifies the overall security around the data because it significantly reduces the number of systems clients have to interact with to gather the data they need.

Joyce, how it works

The cornerstone concept of Joyce is the schema. It lets you shape the ingested data and define how that data will be made available to client services. Using the same declarative approach made popular by Kubernetes, schemas describe the expected result and the platform performs the actions needed to make it happen. Schemas are standard JSON Schema files stored and classified in a catalog. Their definitions fall into three categories:

Input – how to gather and shape the source data. We leverage the Kafka Connect framework to provide ready-made connectors for a wide variety of sources. The ingested data can be filtered, formatted, and enriched with transformation handlers (domain-specific extensions of JSON Schema).

Model – lets you create new aggregates from the data stored in the platform. This feature gives you the freedom to model the data the way client services need it.

Export – bulk data export capability. An export can be any query run against the existing data, with an optional temporal filter.

Input and model data is made available to all client services with the proper authorization grants through auto-generated REST and GraphQL APIs. It is also possible to subscribe to a dedicated topic if an event-driven approach is more suitable for the use case.

MongoDB: the key for a flexible model and performance at scale

We rely heavily on MongoDB. Thanks to its flexibility, we can easily map any data structure the user defines to collect the data; half of a schema definition is basically the definition of a MongoDB schema. (We also auto-generate one schema per collection to guarantee data integrity; a generic sketch of this mechanism appears at the end of this article.) Joyce runs in a Kubernetes cluster and all of its services are inherently stateless to exploit the full potential of horizontal scaling. The architecture is based on the CQRS pattern, which means that writes and reads are completely decoupled and can scale independently to meet the unique needs of the production environment. MongoDB is also the backing database of the API layer, so we can keep the promise of low latency, high throughput, and continuous availability across all components of the stack. The platform is available as a fully managed PaaS on the three major cloud providers (AWS, Azure, GCP), but it can also be installed on existing infrastructure, in the cloud or on premises.

Final considerations

There are many challenges leaders must face for a successful digital transformation. They need to guide their organizations through a process that involves change on many levels, and the exponential growth of technological solutions in the last few years adds more complexity and confusion.
The evolution of organizational models and methodologies points in the direction of shared responsibility, people empowerment, and autonomous teams under light and effective central governance. The same evolution also permeates novel approaches to enterprise architecture such as the data mesh. Unfortunately, there’s no silver bullet, just the right choices for a given context. Despite all the marketing and hype around this or that solution to all of your digital transformation needs, a long-term successful shift needs guidance, competence, and empowerment. We built Joyce with the aim of reducing the burden of repetitive tasks and boilerplate code, so you get results faster and catch the low-hanging fruit, without trying to replace the architectural thinking needed to properly define the current state and the evolution of our customers’ enterprise architectures. If you’re struggling with the problems listed at the beginning of this article, you should give Joyce a try. Learn more about Joyce
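A footnote on the MongoDB section above: the article mentions auto-generating one schema per collection to guarantee data integrity. The snippet below is not Joyce’s actual implementation, just a minimal sketch of MongoDB’s built-in $jsonSchema collection validation, the generic mechanism such a feature can build on; the database, collection, and field names are hypothetical.

```typescript
// Illustrative only: this is not Joyce's internal code, just MongoDB's built-in
// JSON Schema validation, the generic mechanism for enforcing one schema per
// collection. Database, collection, and field names are hypothetical.
import { MongoClient } from "mongodb";

async function createValidatedCollection(uri: string): Promise<void> {
  const client = new MongoClient(uri);
  try {
    await client.connect();
    const db = client.db("hub");

    await db.createCollection("customer_orders", {
      validator: {
        $jsonSchema: {
          bsonType: "object",
          required: ["customerId", "total", "createdAt"],
          properties: {
            customerId: { bsonType: "string" },
            total: { bsonType: "double", minimum: 0 },
            createdAt: { bsonType: "date" },
          },
        },
      },
      validationAction: "error", // reject writes that violate the declared shape
    });
  } finally {
    await client.close();
  }
}
```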

December 21, 2021

Introducing Pay as You Go MongoDB Atlas on AWS Marketplace

We’re excited to introduce a new way of paying for MongoDB Atlas. AWS customers can now pay Atlas charges via our new AWS Marketplace listing. Through this listing, individual developers can enjoy a simplified payment experience via their AWS accounts, while enterprises now have another way to procure MongoDB in addition to privately negotiated offers, which were already supported via AWS Marketplace.

Previously, customers who wanted to pay via AWS Marketplace had to commit to a certain level of usage upfront. Pay as you go has been available directly in Atlas via credit card, PayPal, and invoice, but not in AWS Marketplace, until today. With this new listing and integration, you can pay via AWS with no upfront commitments. Simply subscribe via AWS Marketplace and start using Atlas. You can get started for free with Atlas’s free-forever tier, then scale as needed. You’ll be charged in AWS only for the resources you use in Atlas, with no payment minimum. Deploy, scale, and tear down resources in Atlas as needed; you’ll pay just for the hours that you’re using them.

Atlas comes with a Basic Support Plan via in-app chat. If you want to upgrade to another Atlas support plan, you can do so in Atlas. Usage and support costs will be billed together to your AWS account daily. If you’re connecting Atlas to applications running in AWS, or integrating with other AWS services, you’ll be able to see all your costs in one place in your AWS account.

To get started with Atlas via AWS Marketplace, visit our Marketplace listing and subscribe using your account. You’ll then be prompted to either sign in to your existing Atlas account or sign up for a new one. Try MongoDB Atlas for Free Today!

December 15, 2021

MongoDB Atlas for Government Achieves "FedRAMP In-process"

We are pleased to announce that MongoDB Atlas for Government has achieved the FedRAMP designation of “In-process”. This status reflects MongoDB’s continued progress toward a FedRAMP Authorized modern data platform for the US Government. Earlier this year, MongoDB Atlas for Government achieved the designation of FedRAMP Ready. MongoDB is widely used across the federal government, including the Department of Veterans Affairs, the Department of Health & Human Services (HHS), the General Services Administration, and others. HHS is also sponsoring the FedRAMP authorization process for MongoDB.

What is MongoDB Atlas for Government?

MongoDB Atlas for Government is an independent environment of our flagship cloud product, MongoDB Atlas, built for US government needs. It allows federal, state, and local governments as well as educational institutions to build and iterate faster using a modern database-as-a-service platform. The service is available in AWS GovCloud (US) and AWS US East/West regions.

MongoDB Atlas for Government highlights:

Atlas for Government clusters can be created in AWS GovCloud East/West or AWS East/West regions.

Atlas for Government clusters can span regions within AWS GovCloud or within AWS.

Atlas core features such as automated backups, AWS PrivateLink, AWS KMS, federated authentication, Atlas Search, and more are fully supported.

Applications can use client-side field level encryption with AWS KMS in GovCloud or AWS East/West.

Getting started and pricing

MongoDB Atlas for Government is available to government customers and to companies that sell to the US Government. You can buy Atlas for Government through AWS GovCloud or the AWS Marketplace. Please fill out this form and a representative will get in touch with you. To learn more about Atlas for Government, visit the product page, check out the documentation, or read the FedRAMP FAQ.

September 22, 2021

Highlight What Matters with the MongoDB Charts SDK

We're proud to announce that with the latest release of the MongoDB Charts SDK you can now apply highlights to your charts. Highlights let you emphasize and de-emphasize parts of your charts using MongoDB query operators, so you can build a richer interactive experience for your customers with the MongoDB Charts embedding SDK.

By default, MongoDB Charts lets you emphasize parts of a chart by series when you click within a legend. With the new highlight capability in the Charts Embedding SDK, we put you in control of when this highlighting should occur and what it applies to.

Why would you want to apply highlights?

Highlighting opens up the opportunity for new experiences for your users. The two main reasons you may want to highlight are:

To show user interactions: We use this in the click handler sandbox to make it obvious what the user has clicked on. You could also use this to show documents affected by a query for a control panel.

To attract the user’s attention: There may be a part of the chart you want your users to focus on, such as the profit for the current quarter or the table rows of unfilled orders.

Getting started

With the release of the Embedding SDK, we've added the setHighlight method to the chart object, which uses MQL queries to decide what gets highlighted. This lets you attract attention to marks in a bar chart, lines in a line chart, or rows in a table. Most of our chart types are already supported, and more will be supported over time. If you want to dive into the deep end, we've added a new highlighting example and updated the click event examples to use the new highlighting API:

Highlighting sandbox

Click events sandbox

Click events with filtering sandbox

The anatomy of a click

In MongoDB Charts, each click produces a wealth of information that you can then use in your applications. In particular, we generate an MQL expression called selectionFilter, which represents the mark selected. Note that this filter uses the field names in your documents, not the channel names. Previously, you could use this to filter your charts with setFilter; now you can use the same filter to apply emphasis to your charts. All this requires is calling setHighlight on your chart with the selectionFilter query that you get from the click event, as seen in this sandbox (and in the short sketch at the end of this post).

Applying more complex highlights

Since we accept a subset of the MQL language for highlighting, it's possible to specify highlights that target multiple marks, as well as multiple conditions. We can use expressions like $lt and $gte to define ranges we want to highlight, and since the logical operators are supported as well, you can even use $and / $or. All the Comparison, Logical, and Element query operators are supported, so give it a spin!

Conclusion

The ability to highlight data will make your charts more interactive and help you emphasize the most important information in them. Check out the embedding SDK to start highlighting today! New to Charts? You can start now for free by signing up for MongoDB Atlas, deploying a free tier cluster, and activating Charts. Have an idea on how we can make MongoDB Charts better? Feel free to leave an idea at the MongoDB Feedback Engine.
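To make the wiring above concrete, here is a minimal TypeScript sketch using the embedding SDK. The baseUrl, chartId, and the totalPrice field are placeholders; the selectionFilter property on the click payload is the one described in this post.

```typescript
// Minimal sketch: render an embedded chart and wire click events to setHighlight.
// The baseUrl, chartId, and the totalPrice field below are placeholders.
import ChartsEmbedSDK from "@mongodb-js/charts-embed-dom";

async function renderWithHighlighting(container: HTMLElement): Promise<void> {
  const sdk = new ChartsEmbedSDK({
    baseUrl: "https://charts.mongodb.com/charts-example-xxxxx", // your Charts base URL
  });
  const chart = sdk.createChart({ chartId: "<your-chart-id>" });

  await chart.render(container);

  // Re-use the MQL filter generated for each click to emphasize the clicked mark.
  await chart.addEventListener("click", async (event: any) => {
    await chart.setHighlight(event.selectionFilter);
  });

  // Or draw attention independently of clicks, e.g. highlight large orders.
  await chart.setHighlight({ totalPrice: { $gte: 1000 } });
}
```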

September 2, 2021

Fine-Tune Relevance in MongoDB Atlas Search with Function Scoring and Synonyms

MongoDB Atlas Search is an embedded full-text search solution in MongoDB Atlas that gives developers a seamless and scalable experience for building fast, relevance-based application features. We announced its general availability last year at MongoDB.live 2020, and over the past year we’ve introduced many new features, including a visual index builder, search query tester, custom analyzers, and wildcard path queries. This year at MongoDB.live 2021, we’re excited to highlight two new capabilities that help developers tune the relevance of search results. See how easy it is to get started with MongoDB Atlas Search in this demo video by Marcus Eagan, Senior Product Manager for Atlas Search.

Building relevance into search results

Understanding the behavior of your users is essential when thinking about search result relevance. People don’t always tell you what they want, and they sometimes use words or phrases that don’t match your content exactly. To cover these scenarios, you can use full-text search features like function scoring and synonyms.

Influence search rankings with function scoring

There are often multiple factors that influence how search results should be ranked. For example, let’s say you have a restaurant finder application. The explicit inputs are things like the user’s location and what they’re searching for, but what’s implied is that they likely want to see highly rated restaurants or ones with more reviews.

What’s Cooking: a sample restaurant finder application using MongoDB Atlas Search

Function scoring allows you to influence the order of results returned by manipulating the score of each result. In Atlas Search, that means you can use a numeric field in a document and apply a mathematical expression to it. For example, you might want to increase the score of restaurants that are sponsored or have higher star ratings. This can easily be accomplished within the same search query by adding the function option to the score parameter of your query (a short sketch appears at the end of this post). Learn more about how to use function scores in our developer tutorial.

Show results for more search queries with synonyms

Synonyms are often used to define terms that are semantically similar to each other to improve search results. For example, someone searching for “noodles” might want to find results for “spaghetti”, “chow mein”, or “pad thai”. Synonyms can also help with typos, especially on mobile and small keyboards. In Atlas Search, you can define collections of synonyms for a search index via the API. Synonyms can be explicit (one-way) or equivalent (two-way). Explicit synonyms are good for defining relationships between terms that are subsets of each other, like the noodle example above: “spaghetti”, “chow mein”, and “pad thai” are all explicit synonyms for “noodles”, but not for each other (you don’t want results for “chow mein” in a search for “spaghetti”). Equivalent synonyms are used for terms that have regional variations or are otherwise interchangeable both ways, like soda and pop, or Kleenex and tissues.

What's next for Atlas Search

Developers are increasingly turning to full-text search to make content more discoverable and relevant for application end users. With Atlas Search, we hope to not only make building full-text search easier, but also more powerful and expressive. Join our community to ask questions and find out what other developers are building with Atlas Search, and let us know what you think we should build next in our feedback forums.
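As promised above, here is a hedged sketch of a function-scored, synonym-aware query issued from the Node.js driver in TypeScript. The index name, the "restaurant_synonyms" mapping, and the database/field names are assumptions; check the Atlas Search documentation for exact option details before relying on this shape.

```typescript
// Hedged sketch: boost text relevance by a numeric "rating" field and apply a
// synonym mapping. The index name, synonym mapping name, and field names are
// assumptions; verify option details against the Atlas Search documentation.
import { MongoClient } from "mongodb";

async function searchRestaurants(uri: string) {
  const client = new MongoClient(uri);
  try {
    await client.connect();
    const restaurants = client.db("sample_restaurants").collection("restaurants");

    return await restaurants
      .aggregate([
        {
          $search: {
            index: "default",
            text: {
              query: "noodles",
              path: "cuisine",
              synonyms: "restaurant_synonyms", // mapping defined on the search index
              score: {
                function: {
                  // final score = text relevance * star rating (1 if rating is missing)
                  multiply: [
                    { score: "relevance" },
                    { path: { value: "rating", undefined: 1 } },
                  ],
                },
              },
            },
          },
        },
        { $limit: 10 },
      ])
      .toArray();
  } finally {
    await client.close();
  }
}
```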

July 13, 2021

Launched Today: MongoDB 5.0, Serverless Atlas, and the Evolution of our Application Data Platform

Today we welcome you to our annual MongoDB.live developer conference. Through our keynote and conference sessions we'll show you all the improvements, new features, and exciting things we've been working on since last year’s conference. What I want to do in this blog post is provide you with a summary of what we are announcing, along with resources to help you learn more. While it's easy to focus on what we are announcing at this year's event, we actually started out on this journey 12 years ago by releasing the world’s most intuitive and productive database technology to develop with: MongoDB. And we believe the applications of the next 10 years will be built on data architectures that continue to optimize for the developer experience, allowing teams like yours to innovate at speed and scale. So how are we building on this vision? Today I am incredibly proud to announce three big things:

The General Availability (GA) of MongoDB 5.0, the latest generation of our core database. It includes native support for time series workloads, new ways to future-proof your applications, and multi-cloud privacy controls, along with a host of other improvements and new features.

The preview release of serverless instances on MongoDB Atlas, which makes it even easier for development teams who don’t want to think about capacity management at all to get the database resources they need quickly and efficiently.

Major enhancements to Atlas Data Lake, Atlas Search, and Realm Sync, which allow engineering teams to reduce architectural complexity and get more value out of their data by leveraging a unified application data platform.

MongoDB 5.0 GA

MongoDB 5.0 is the latest generation of the database most wanted by developers. Our new release makes it even easier to support a broader range of workloads, introduces new ways of future-proofing your apps, and further enhances privacy and security. The major jump in version number from MongoDB 4.4 – our prior GA version – to 5.0 reflects a new era for MongoDB's release cadence: we want to get new features and improvements into your hands faster. Starting with MongoDB 5.0, we will be publishing new Rapid Releases every quarter, which will roll up into Major Releases once a year for those of you who want to maintain the existing annual upgrade cadence. You can learn more about the new MongoDB release cadence from our blog post published last October. Digging into MongoDB 5.0, here is what’s new and improved:

Native Time Series

Designed for IoT and financial analytics, our new time series collections, clustered indexing, and window functions make it easier, faster, and lower cost to build and run time series applications, and to enrich your enterprise data with time series measurements. MongoDB automatically optimizes your schema for high storage efficiency, low latency queries, and real-time analytics against temporal data. Running your time series applications on our application data platform eliminates the time and complexity of having to stitch together multiple technologies yourself. You can manage the entire time series data lifecycle in MongoDB, from ingestion, storage, querying, real-time analysis, and visualization through to online archiving or automatic expiration as data ages. Time series collections can sit right alongside regular collections in your MongoDB database, making it really easy to combine time series data with your enterprise data within a single versatile, flexible database, using a single query API to power almost any class of workload (a minimal sketch of creating a time series collection follows below).
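Here is a minimal sketch of creating and writing to a time series collection from the Node.js driver in TypeScript (MongoDB 5.0+); the database, collection, and field names are illustrative.

```typescript
// Minimal sketch: create a MongoDB 5.0 time series collection and insert a
// measurement. Database, collection, and field names are illustrative.
import { MongoClient } from "mongodb";

async function createSensorCollection(uri: string): Promise<void> {
  const client = new MongoClient(uri);
  try {
    await client.connect();
    const db = client.db("iot");

    await db.createCollection("sensor_readings", {
      timeseries: {
        timeField: "ts",        // required: the measurement timestamp
        metaField: "sensor",    // optional: per-series metadata (device id, site, ...)
        granularity: "minutes", // storage bucketing hint
      },
      expireAfterSeconds: 60 * 60 * 24 * 30, // automatically expire data after ~30 days
    });

    await db.collection("sensor_readings").insertOne({
      ts: new Date(),
      sensor: { deviceId: "thermostat-42", site: "plant-1" },
      temperatureC: 21.4,
    });
  } finally {
    await client.close();
  }
}
```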
Our new time-series collections blog post gives you everything you need to get started.

Future-proof with the Versioned API and Live Resharding

Starting with MongoDB 5.0, the Versioned API future-proofs your applications. You can fearlessly upgrade to the latest MongoDB releases without the risk of introducing backward-breaking changes that require application-side rework. Using the new Versioned API decouples your app lifecycle from the database lifecycle, so you only need to update your application when you want to introduce new functionality, not when you upgrade the database (a minimal connection sketch appears at the end of this post). Future-proofing doesn’t end with the Versioned API. MongoDB 5.0 also introduces Live Resharding, which allows you to easily change the shard key for your collections on demand, with no database downtime, as your workload grows and evolves. The way I like to think about this is that we’ve extended the flexibility the document model has always given you down to how you distribute your data. So as things change, MongoDB adapts without expensive schema or sharding migrations.

Next-Gen Privacy & Security

MongoDB’s unique Client-Side Field Level Encryption now extends some of the strongest data privacy controls available anywhere to multi-cloud databases. And with the ability in 5.0 to reconfigure your audit log filters and rotate x.509 certificates without downtime, you can maintain a strict security posture with no interruption to your applications.

Run MongoDB 5.0 Anywhere

MongoDB 5.0 is available today as a fully managed service in Atlas. You can of course also download and run MongoDB 5.0 on your own infrastructure, either with the community edition of MongoDB or with MongoDB Enterprise Advanced. The Enterprise Advanced offering provides sophisticated operational tooling via Ops Manager, advanced security controls, proactive 24x7 support, and more. MongoDB Ops Manager 5.0 enhancements include:

Support for the automation, monitoring, and backup/restore of MongoDB 5.0 deployments.

Improved load performance with parallelized client-side restores.

A quick start experience for deploying MongoDB in Kubernetes with Ops Manager.

A guided Atlas migration experience that walks users through provisioning a migration host to push data from their existing environment into the fully managed Atlas cloud service.

You can learn more about MongoDB 5.0 from our What’s New guide.

New to MongoDB Atlas — Serverless Instances (Preview)

We want developers to be able to build MongoDB applications without having to think about database infrastructure or capacity management. With serverless instances on MongoDB Atlas, now available in preview, you can automatically get the database resources you need based on your workload demand. It’s really simple: the only decision you need to make is the cloud region that hosts your data. After that, you’ll get an on-demand database endpoint that dynamically adapts to your application traffic. Serverless instances will support the latest MongoDB 5.0 GA release, the Versioned API, and upcoming Rapid Releases, so you never have to worry about backwards compatibility or upgrades. Pay only for the reads and writes your application performs and the storage resources you use (up to 1TB of storage in preview), and leave capacity management to MongoDB Atlas’s best-in-class automation. We invite you to try it out today with a new or existing Atlas account.
And the preview release is just the beginning: we will be working with partners such as Vercel and Netlify to deliver an integrated serverless development experience in the coming months. In the longer term, we will continue to evolve our cloud-native backend architecture to abstract and automate even more infrastructure decisions and optimizations to deliver the best database experience on the market.

The New MongoDB Shell GA

The new MongoDB Shell has been redesigned from the ground up to provide a modern command-line experience with enhanced usability features and a powerful scripting environment. It makes it even easier for users to interact with and manage their MongoDB data platform, from running simple queries to scripting admin operations. A great user experience, even on a command-line tool, should always be a major consideration. With the new MongoDB Shell we have introduced syntax highlighting, intelligent auto-complete, contextual help, and useful error messages, creating an intuitive, interactive experience for MongoDB users. Check out this blog post for more information.

MongoDB Charts and Atlas Data Lake: Better Together

MongoDB Charts’ intuitive UI and its ability to quickly create and share charts and graphs of JSON data are now integrated with Atlas Data Lake. You can now easily visualize JSON data stored in Amazon S3 without any data movement, duplication, or transformation. Furthermore, you can run Atlas Data Lake’s federated queries to blend data across multiple Atlas databases and AWS S3, and visualize the results with Charts. By adding Atlas Data Lake as a data source in Charts, you can discover deeper, more meaningful insights in real time. Check out this blog post for more information.

Atlas Search — More Relevance Features

It’s incredibly important for modern applications to deliver fast and relevant search functionality: it powers discoverability and personalization of content, which in turn drives user engagement and retention. Atlas Search, which delivers powerful full-text search functionality without the need for a separate search engine, has several new capabilities for building rich end-user experiences. We’ve recently added support for function scoring, which allows teams to apply mathematical formulas to fields within documents to influence their relevance, such as popularity or distance; for example, closer restaurants with more or better reviews will show up higher in a list of results. In addition, you can now define collections of synonyms for a particular search index. By associating semantically equivalent terms with each other, you can respond to a wider range of user-initiated queries in your applications.

Realm

Realm gives you simple, powerful local persistence on mobile phones, tablets, and IoT devices like the Raspberry Pi. The Realm SDKs provide a set of APIs that let developers store and interact with native objects directly, reducing the amount of code required since there is no need for ORMs or for learning cryptic database syntax. In addition, we made MongoDB Realm Sync generally available earlier this year, making it easy to synchronize data between local storage on your devices and MongoDB Atlas on the backend. There is no need to worry about networking code or conflict resolution, as we handle all of that for you. Today, we’re excited to announce support for Unity. You can now use Realm to store your game data, like scores and player state, and sync it automatically across devices.
Realm's support for Unity is now generally available and ready for production workloads. We're also investing in support for more cross-platform frameworks: the Kotlin Multiplatform and Flutter/Dart SDKs are now both available in alpha. And finally, the team is working towards Realm Flexible Sync, a new way to synchronize data with more granular control. Flexible Sync will allow you to:

Build applications that respond dynamically to users' needs.

Let your end users decide what data they need, and when.

Use more precise permissions that can adapt over time.

Check out this dedicated blog post on our upcoming plans for Flexible Sync to learn more.

Getting Started

With everything we announced today, you can imagine it was a packed keynote! And there is so much more that we didn’t cover. You can get all of the highlights from our new announcements page, where you will also find all the resources you need to get started.
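As a small addendum to the Versioned API section above, here is a minimal sketch of opting an application into API version 1 with the Node.js driver in TypeScript; the connection string is a placeholder.

```typescript
// Minimal sketch: opt an application into Versioned API version 1 so later
// server upgrades cannot silently change the behavior this app depends on.
// The connection string is a placeholder.
import { MongoClient, ServerApiVersion } from "mongodb";

const client = new MongoClient(
  "mongodb+srv://<user>:<password>@cluster0.example.mongodb.net",
  {
    serverApi: {
      version: ServerApiVersion.v1,
      strict: true,            // reject commands and options outside API version 1
      deprecationErrors: true, // surface deprecated usage early
    },
  }
);
```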

July 13, 2021

Visualize Blended Atlas and AWS S3 Data From Atlas Data Lake with MongoDB Charts

We’re excited to announce that MongoDB Charts supports Atlas Data Lake as a data source! You can now use Charts to easily visualize data stored across different Atlas databases and AWS S3 buckets. Thanks to the aggregating power of Atlas Data Lake’s federated query, creating charts and graphs from blended application and cloud object data is simpler than ever before. On the surface, this powerful integration is as simple as adding your Atlas Data Lake as a data source within Charts. However, it unlocks a deeper level of analysis while eliminating the need to create an Extract-Transform-Load (ETL) process across your Atlas and S3 data. The integration provides the ability to visualize data from the following combinations of sources without writing any code:

Data from many Atlas databases or clusters, including multi-cloud clusters

Cloud storage data from AWS S3

Blended Atlas and cloud storage (AWS S3) data

Scenario: Finding insights from aggregated customer profile and contract data

Let’s add a real-world scenario of how this can enhance the analytics you derive from your data. While doing so, we will walk through the steps of setting up your Atlas Data Lake, adding it as a data source to Charts, and getting the most out of your data with Charts’ powerful visualization capabilities. For context, let’s imagine we’re an analyst at a telecom company. First, we have contract data stored in MongoDB Atlas in different clusters and databases for each country we operate in: the United States and Canada. Second, we have offloaded data from our Customer Relationship Management (CRM) tool as a Parquet file into an AWS S3 bucket. All three datasets share a common “customerID” field.

Configure Atlas Data Lake

Because both “contracts” collections (or datasets) in MongoDB Atlas share the same fields, I simply mapped both into a single collection within the data lake. I mapped the customer profiles dataset into its own collection, since it only shares the “customerID” field. However, now that it’s in the same data lake, I can easily join it to my contract data with a $lookup in my Charts aggregation pipeline or with a Lookup Field in the chart builder. (A $lookup in the MongoDB Query API is equivalent to a join in SQL.)

Configure Charts data source

I want to find insights from all contracts, both US and Canadian, in this scenario. Once I have created a single Atlas Data Lake collection (DL_contracts.allcontracts) from the two separate databases, I then need to add it as a data source in Charts. Simply click on “add data source” within Charts, add your data lake, and then choose the collections to use in the next step. For completeness, I also added the two Atlas collections (US and Canada contracts) as data sources in Charts by following the same steps.

Visualize data across multiple Atlas databases

With Atlas Data Lake’s federated query capability, which effectively performs a union of data, I am able to build a column chart that shows the amount of all US and CA contracts in a single chart without writing any code. As you can see below, the chart shows both US and CA columns when connected to the data lake collection. When the data source is switched directly to either Atlas database, it shows data only for that respective database, or country in this example.

Visualize blended data from Atlas and an AWS S3 bucket

Lastly, let’s take our insights to the next level by visualizing data from multiple Atlas databases and a Parquet file stored in an AWS S3 bucket.
Adding the customer profile data that I offloaded from my CRM tool into S3 enables me to find more robust insights. (I could also visualize the data from the Parquet file alone by connecting to that data lake collection.) Since the contract data and customer profile data are in different collections within my Atlas Data Lake, I created a $lookup in the aggregation pipeline of the Charts data source (a sketch of this kind of stage appears at the end of this post). I then created a table chart from three different data sources, with conditional formatting to quickly identify high-value customers. The columns with blue boxes include contract data from both Atlas clusters, while the columns with orange boxes include customer profile data from the Parquet file in the AWS S3 bucket. Note that I could also aggregate the data in Atlas Data Lake and use $out to create a new collection of the data, and then connect Charts to the new collection as a data source; for the purposes of this blog, I wanted to highlight Charts-specific aggregation capabilities.

We hope you’re excited about the ability to easily visualize multiple data sources, from multiple Atlas databases to AWS S3 buckets, in one place! Remember, if you haven’t used Charts before, you can get started for free by signing up for MongoDB Cloud, deploying an Atlas cluster, and activating Charts. Try MongoDB Atlas for free today!
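For readers who want to see the shape of the join described above, here is a hedged TypeScript sketch of that $lookup run against the Data Lake connection. DL_contracts.allcontracts and the customerID field come from this post; the customer_profiles collection name and the projected contractValue/segment fields are assumptions and should be replaced with your own mapping.

```typescript
// Hedged sketch of the join described above, run against the Data Lake endpoint.
// DL_contracts.allcontracts and the customerID field come from the article;
// the customer_profiles collection and the projected fields are assumptions.
import { MongoClient } from "mongodb";

async function contractsWithProfiles(dataLakeUri: string) {
  const client = new MongoClient(dataLakeUri);
  try {
    await client.connect();
    const contracts = client.db("DL_contracts").collection("allcontracts");

    return await contracts
      .aggregate([
        {
          $lookup: {
            from: "customer_profiles", // assumed name of the CRM collection in the data lake
            localField: "customerID",
            foreignField: "customerID",
            as: "profile",
          },
        },
        { $unwind: "$profile" },
        { $project: { customerID: 1, contractValue: 1, "profile.segment": 1 } },
      ])
      .toArray();
  } finally {
    await client.close();
  }
}
```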

July 9, 2021

MongoDB Atlas Celebrates Five Years of Innovation in Data

Today we’re thrilled to celebrate the five-year anniversary of Atlas, MongoDB’s multi-cloud database platform. When we launched Atlas in 2016, we couldn’t have foreseen the impact it would have on both our customers and MongoDB as a company. MongoDB Atlas has allowed us to become an ever more trusted partner to our customers, playing a key role in their efforts to manage and mitigate risk. One of the insights that spurred the development of Atlas — that it should be easy to move your data into, out of, and between clouds — remains as groundbreaking today as it was five years ago.

Thanks to our commitment to innovation, reliability, and security, Atlas’s customers include well-known companies such as Forbes, Toyota Material Handling, Pitney Bowes, and 7-Eleven. Sixty of the Fortune 100 and many of the world’s most innovative disruptors rely on Atlas to help them grow, become more efficient, gain insights from their data, and create superior customer experiences. Atlas has transformed MongoDB into a cloud-first company: Atlas’s revenue is growing at 73 percent a year and currently accounts for more than half of MongoDB’s revenue.

We got a glimpse of the future in 2009 with the first production version of MongoDB. For years, developers had struggled to build modern applications on top of decades-old relational databases. Using a JSON-based document model, MongoDB was exceptionally fast, scalable, flexible, and intuitive for developers. It was unusually proficient with both structured and unstructured data. The database’s very design pushed developers and engineers to think differently about how they worked with data.

As our customers investigated new business models enabled by the cloud, we noticed two things about the way they were working with data. First, they were increasingly choosing to use MongoDB in the cloud rather than hosting it on premises. Second, our customers were gravitating toward managed services. If our customers were going to the cloud — and they were — we needed to be there too. Our goal was not only to become a cloud-native database, but also to provide developers with a superior platform so they could change the world with data — and employ all the cloud’s potential to do it. As a managed service, Atlas would free developers from the overhead of managing MongoDB themselves. By making data portable across the biggest public clouds, such as Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure, it would free data from vendor lock-in. Developers and engineering teams could match the right cloud to the right workload in a few clicks and a few minutes, without writing or rewriting code.

Since Atlas shipped in 2016, it has gained more than 25,000 customers. We’ve dramatically improved its functionality, turning it into an even more powerful tool to fulfill our mission of making data stunningly easy to use. At its 2016 launch, Atlas was available in four AWS regions. The next year, we introduced cross-region replication: within a single cloud, a customer could now enable cross-region deployments for even better availability guarantees. We launched global clusters in 2018, which made it easier to place data closer to the user and enabled our customers’ applications to run faster all over the world. MongoDB Atlas is now available in about 80 regions across AWS, Microsoft Azure, and GCP. Atlas customers can switch clouds as business requirements change, to take advantage of pricing changes and make the best use of each cloud’s capabilities.
Our broad reach also helps our customers comply with increasingly complex data sovereignty and localization requirements. We’ve pushed Atlas well beyond the ability to serve larger numbers of regions. In 2019 we acquired Realm, which makes data accessible no matter where your users are or how lousy their connectivity may be. The MongoDB Realm Mobile Database enables simple, powerful persistence on mobile devices, so apps can work offline as well as they do online. Two years later, we released MongoDB Realm Sync, making it even easier to keep data in sync across users and devices and to connect to the Atlas database on the backend. We also released MongoDB Atlas Search, which enables developers to create rich, relevance-based search without moving their data into a separate search engine. That was accompanied by MongoDB Atlas Data Lake, enabling developers to use federated queries to analyze data across tiers.

In 2019 we introduced client-side field-level encryption, an industry-leading approach to security. It’s relatively common to encrypt data at rest or in transit, but client-side field-level encryption encrypts data while it’s in use. There’s no additional code for developers to write and no significant impact on performance, and applications can still query the data. Client-side field-level encryption enables our clients to use managed services in the cloud with more confidence, because even those who support the underlying cloud infrastructure cannot decrypt the data. It also makes it easier to comply with the increasingly common “right to be forgotten” mandates in contemporary privacy legislation: a user can be forgotten simply by destroying the associated encryption key, making their data unreadable and irrecoverable.

In 2020, our development teams accomplished what had been their mission since the inception of Atlas: multi-cloud clusters. With multi-cloud clusters, MongoDB Atlas goes well beyond its promise — fulfilled years earlier — to work equally well in any of the public clouds. Multi-cloud clusters enable a single cluster to span multiple clouds simultaneously, making it trivial to move data between them.

We have big plans for the next five years, and we’re already getting started. Soon we’ll preview serverless instances on MongoDB Atlas, making it even easier for development teams to get the capacity they need, when they need it. You choose the region that hosts your data and we’ll do the rest, with an on-demand database endpoint that dynamically adapts to your application traffic. We’re also making it easier to support a broader range of workloads, offering new ways to future-proof apps, and continuing to improve security and privacy capabilities. We’ll be making major enhancements to Atlas Data Lake, Atlas Search, and Realm Sync, all of which reduce architectural complexity and allow our customers to get more value from their data with a unified application data platform. And we’ll be doing it all on an accelerated cadence: starting with MongoDB 5.0, we’ll publish new releases every quarter for those who want to be on the fast track, and then roll those up into annual Major Releases for those who want to stay on the current cycle.

We expect the next five years to be just as exciting and innovative as the past five — if not more. We can’t wait to see you there. For more on Atlas’s five-year anniversary, check out this video.

June 28, 2021