From 24e6ec3dcac15a982106439a73c13d6b20a38a58 Mon Sep 17 00:00:00 2001
From: Fenil Mehta <42742240+fenilgmehta@users.noreply.github.com>
Date: Tue, 2 Jan 2024 23:36:45 +0530
Subject: [PATCH] docs: fix spelling and grammar (#26381)

---
 RELEASING/release-notes-2-0/README.md    | 2 +-
 docs/docs/frequently-asked-questions.mdx | 4 ++--
 docs/docs/installation/event-logging.mdx | 2 +-
 3 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/RELEASING/release-notes-2-0/README.md b/RELEASING/release-notes-2-0/README.md
index 265dd3294..617159100 100644
--- a/RELEASING/release-notes-2-0/README.md
+++ b/RELEASING/release-notes-2-0/README.md
@@ -34,7 +34,7 @@ Superset 2.0 is a big step forward. This release cleans up many legacy code path
 - New GitHub workflow to test Storybook Netlify instance nightly
   ([#19852](https://github.com/apache/superset/pull/19852))
 
-- Minimum requirement for Superset is now Python 3.8 ([#19017](https://github.com/apache/superset/pull/19017)
+- Minimum requirement for Superset is now Python 3.8 ([#19017](https://github.com/apache/superset/pull/19017))
 
 ## Features
 
diff --git a/docs/docs/frequently-asked-questions.mdx b/docs/docs/frequently-asked-questions.mdx
index 3007584ab..8c9fa034c 100644
--- a/docs/docs/frequently-asked-questions.mdx
+++ b/docs/docs/frequently-asked-questions.mdx
@@ -154,7 +154,7 @@ Table schemas evolve, and Superset needs to reflect that. It’s pretty common i
 dashboard to want to add a new dimension or metric. To get Superset to discover your new columns,
 all you have to do is to go to **Data -> Datasets**, click the edit icon next to the dataset whose
 schema has changed, and hit **Sync columns from source** from the **Columns** tab.
-Behind the scene, the new columns will get merged it. Following this, you may want to re-edit the
+Behind the scene, the new columns will get merged. Following this, you may want to re-edit the
 table afterwards to configure the Columns tab, check the appropriate boxes and save again.
 
 ### What database engine can I use as a backend for Superset?
@@ -220,7 +220,7 @@ and write your own connector. The only example of this at the moment is the Drui
 is getting superseded by Druid’s growing SQL support and the recent availability of a DBAPI and
 SQLAlchemy driver. If the database you are considering integrating has any kind of of SQL support,
 it’s probably preferable to go the SQLAlchemy route. Note that for a native connector to be possible
-the database needs to have support for running OLAP-type queries and should be able to things that
+the database needs to have support for running OLAP-type queries and should be able to do things that
 are typical in basic SQL:
 
 - aggregate data
diff --git a/docs/docs/installation/event-logging.mdx b/docs/docs/installation/event-logging.mdx
index e6b0f8b35..f5dcb53c8 100644
--- a/docs/docs/installation/event-logging.mdx
+++ b/docs/docs/installation/event-logging.mdx
@@ -56,5 +56,5 @@ from superset.stats_logger import StatsdStatsLogger
 STATS_LOGGER = StatsdStatsLogger(host='localhost', port=8125, prefix='superset')
 ```
 
-Note that it’s also possible to implement you own logger by deriving
+Note that it’s also possible to implement your own logger by deriving
 `superset.stats_logger.BaseStatsLogger`.
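The event-logging hunk above mentions that you can implement your own logger by deriving `superset.stats_logger.BaseStatsLogger`. A minimal sketch of that pattern: `BaseStatsLogger` below is a local stand-in for the Superset class (so the snippet is self-contained), and the method names `key`, `incr`, and `timing` are assumptions about that interface, not a verified copy of it.

```python
# Hypothetical sketch of a custom stats logger, modeled on the pattern the
# Superset docs describe. BaseStatsLogger here is a local stand-in for
# superset.stats_logger.BaseStatsLogger; its method names are assumptions.
from typing import Dict


class BaseStatsLogger:
    """Stand-in for superset.stats_logger.BaseStatsLogger."""

    def __init__(self, prefix: str = "superset") -> None:
        self.prefix = prefix

    def key(self, key: str) -> str:
        # Namespace every metric under the configured prefix.
        return f"{self.prefix}.{key}" if self.prefix else key

    def incr(self, key: str) -> None:
        raise NotImplementedError

    def timing(self, key: str, value: float) -> None:
        raise NotImplementedError


class InMemoryStatsLogger(BaseStatsLogger):
    """Collects metrics in plain dicts -- handy for tests or local debugging."""

    def __init__(self, prefix: str = "superset") -> None:
        super().__init__(prefix)
        self.counters: Dict[str, int] = {}
        self.timings: Dict[str, float] = {}

    def incr(self, key: str) -> None:
        full_key = self.key(key)
        self.counters[full_key] = self.counters.get(full_key, 0) + 1

    def timing(self, key: str, value: float) -> None:
        self.timings[self.key(key)] = value


# In superset_config.py you would then point Superset at the custom logger,
# mirroring the StatsdStatsLogger example in the patched docs:
# STATS_LOGGER = InMemoryStatsLogger(prefix='superset')
```

The in-memory dicts are just for illustration; a real subclass would forward each call to whatever metrics backend you use.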