docs: various adjustments across the docs (#29093)

Co-authored-by: Evan Rusackas <evan@preset.io>
Co-authored-by: John Bodley <4567245+john-bodley@users.noreply.github.com>
Michael Holthausen 2024-06-05 20:53:08 +02:00 committed by GitHub
parent b5d9ac0690
commit de3a1d87b3
8 changed files with 14 additions and 14 deletions


@@ -166,7 +166,7 @@ WEBDRIVER_BASEURL_USER_FRIENDLY = "http://localhost:8088"
```
You also need
-to specify on behalf of which username to render the dashboards. In general dashboards and charts
+to specify on behalf of which username to render the dashboards. In general, dashboards and charts
are not accessible to unauthorized requests, which is why the worker needs to assume the credentials
of an existing user to take a snapshot.
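In `superset_config.py`, that looks roughly like the sketch below (the `admin` username and the internal hostname are assumptions; use an existing user with access to the dashboards being rendered):

```python
# superset_config.py (sketch) -- the worker renders dashboards as this user.
# "admin" is an assumption; substitute any existing user with access.
THUMBNAIL_SELENIUM_USER = "admin"

# URL the headless browser uses to reach Superset from the worker host...
WEBDRIVER_BASEURL = "http://superset:8088/"
# ...and the user-facing URL embedded in report emails and links.
WEBDRIVER_BASEURL_USER_FRIENDLY = "http://localhost:8088/"
```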


@@ -197,7 +197,7 @@ for production use._
If you're not using Gunicorn, you may want to disable the use of `flask-compress` by setting
`COMPRESS_REGISTER = False` in your `superset_config.py`.
-Currently, Google BigQuery python sdk is not compatible with `gevent`, due to some dynamic monkeypatching on python core library by `gevent`.
+Currently, the Google BigQuery Python SDK is not compatible with `gevent`, due to `gevent`'s dynamic monkeypatching of Python core libraries.
So, when you use a `BigQuery` datasource in Superset, you have to use a `gunicorn` worker type other than `gevent`.
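One way to pin the worker type is a `gunicorn` config file; a minimal sketch (the worker and thread counts are assumptions to tune for your deployment):

```python
# gunicorn.conf.py (sketch): pick any worker class other than "gevent"
# when BigQuery is in play; "gthread" is one common choice.
worker_class = "gthread"
workers = 8        # assumption: size to your CPU count
threads = 4        # threads per worker
bind = "0.0.0.0:8088"
```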
## HTTPS Configuration


@@ -176,7 +176,7 @@ start Python in the Superset application container or host environment and try t
directly to the desired database and fetch data. This eliminates Superset for the
purposes of isolating the problem.
-Repeat this process for each different type of database you want Superset to be able to connect to.
+Repeat this process for each type of database you want Superset to connect to.
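A minimal sketch of that check, using plain SQLAlchemy outside Superset (the in-memory SQLite URI is a stand-in; substitute your database's SQLAlchemy URI):

```python
from sqlalchemy import create_engine, text

# Stand-in URI -- replace with e.g. "postgresql://user:pw@host:5432/dbname".
engine = create_engine("sqlite://")
with engine.connect() as conn:
    result = conn.execute(text("SELECT 1")).scalar()

# A successful fetch means the driver and network path work without Superset.
print(result)
```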
### Database-specific Instructions
@@ -830,7 +830,7 @@ You should then be able to connect to your BigQuery datasets.
To be able to upload CSV or Excel files to BigQuery in Superset, you'll need to also add the
[pandas_gbq](https://github.com/pydata/pandas-gbq) library.
-Currently, Google BigQuery python sdk is not compatible with `gevent`, due to some dynamic monkeypatching on python core library by `gevent`.
+Currently, the Google BigQuery Python SDK is not compatible with `gevent`, due to `gevent`'s dynamic monkeypatching of Python core libraries.
So, when you deploy Superset with the `gunicorn` server, you have to use a worker type other than `gevent`.


@@ -43,8 +43,8 @@ running a custom auth postback endpoint), you can add the endpoints to `WTF_CSRF
2. Create database w/ ssh tunnel enabled
- With the feature flag enabled, you should now see the SSH tunnel toggle.
-- Click the toggle to enables ssh tunneling and add your credentials accordingly.
-- Superset allows for 2 different type authentication (Basic + Private Key). These credentials should come from your service provider.
+- Click the toggle to enable SSH tunneling and add your credentials accordingly.
+- Superset allows for two different types of authentication (Basic + Private Key). These credentials should come from your service provider.
3. Verify data is flowing
- Once SSH tunneling has been enabled, go to SQL Lab and write a query to verify data is properly flowing.


@@ -117,11 +117,11 @@ its metadata database. In production, this database should be backed up. The de
with docker compose will store that data in a PostgreSQL database contained in a Docker
[volume](https://docs.docker.com/storage/volumes/), which is not backed up.
-Again **DO NOT USE THIS FOR PRODUCTION**
+Again, **DO NOT USE THIS FOR PRODUCTION**
:::
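If you do need a copy of that metadata while experimenting, a `pg_dump` through the compose service is one sketch (the `db` service name and `superset` credentials are assumptions from the default compose file; check yours):

```shell
# Dump the metadata database out of the Docker volume to a local file.
docker compose exec db pg_dump -U superset superset > superset-metadata.sql
```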
-You should see a wall of logging output from the containers being launched on your machine. Once
+You should see a stream of logging output from the containers being launched on your machine. Once
this output slows, you should have a running instance of Superset on your local machine! To avoid
the wall of text on future runs, add the `-d` option to the end of the `docker compose up` command.


@@ -9,13 +9,13 @@ version: 1
## Docker Compose
-First make sure to wind down the running containers in Docker Compose:
+First, make sure to shut down the running containers in Docker Compose:
```bash
docker compose down
```
-Then, update the folder that mirrors the `superset` repo through git:
+Next, update the folder that mirrors the `superset` repo through git:
```bash
git pull origin master
```


@@ -4,14 +4,14 @@ hide_title: false
sidebar_position: 2
---
-**Ready to give Apache Superset a try?** This quickstart guide will help you
+**Ready to try Apache Superset?** This quickstart guide will help you
get up and running on your local machine in **3 simple steps**. Note that
it assumes that you have [Docker](https://www.docker.com),
[Docker Compose](https://docs.docker.com/compose/), and
[Git](https://git-scm.com/) installed.
:::caution
-While we recommend using `Docker Compose` for a quick start in a sandbox-type
+Although we recommend using `Docker Compose` for a quick start in a sandbox-type
environment and for other development-type use cases, **we
do not recommend this setup for production**. For this purpose please
refer to our


@@ -137,10 +137,10 @@ Next, within the **Query** section, remove the default COUNT(\*) and add Cost, k
SUM aggregate. Note that Apache Superset will indicate the type of the metric by the symbol on the
left hand column of the list (ABC for string, # for number, a clock face for time, etc.).
-In **Group by** select **Time**: this will automatically use the Time Column and Time Grain
+In **Group by**, select **Time**: this will automatically use the Time Column and Time Grain
selections we defined in the Time section.
-Within **Columns**, select first Department and then Travel Class. All set lets **Run Query** to
+Within **Columns**, first select Department and then Travel Class. All set? Let's **Run Query** to
see some data!
<img src={useBaseUrl("/img/tutorial/tutorial_pivot_table.png" )} />
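The pivot those steps build can be sketched in pandas with hypothetical data matching the tutorial's columns (the names and values are assumptions for illustration):

```python
import pandas as pd

# Hypothetical rows mirroring the tutorial's Time / Department /
# Travel Class / Cost columns.
df = pd.DataFrame({
    "Time": pd.to_datetime(["2024-01-01", "2024-01-01", "2024-02-01"]),
    "Department": ["Sales", "Sales", "HR"],
    "Travel Class": ["Economy", "Business", "Economy"],
    "Cost": [100.0, 250.0, 80.0],
})

# Group by Time, pivot Department then Travel Class into columns,
# aggregating Cost with SUM -- the same shape the chart renders.
pivot = df.pivot_table(index="Time", columns=["Department", "Travel Class"],
                       values="Cost", aggfunc="sum")
print(pivot)
```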