COVID-19 PubSeq: Public SARS-CoV-2 Sequence Resource


May 2021 update: we are now at 86,377 sequences with normalized metadata on AWS OpenData!


COVID-19 PubSeq Uploading Data (part 3)

1 Introduction

In this document we explain how to upload data into COVID-19 PubSeq. This can happen through a web page, or through a command line script. We'll also show how to parametrize uploads by using templates. The procedure is much easier than with other sequence repositories and PubSeq uploads can be fully automated. Once uploaded you can use our export API to prepare for other repositories, including GenBank.

2 Uploading data

PubSeq allows you to upload your SARS-CoV-2 strains to a public resource for global comparisons. A recompute of the pangenome gets triggered on upload. Read the ABOUT page for more information.

3 Step 1: Upload sequence

To upload a sequence with the web upload page hit the File browse button and select the FASTA file on your local hard disk.

We start with an assembled or mapped sequence in FASTA format. The PubSeq uploader contains a QC step which checks whether it is a likely SARS-CoV-2 sequence. While PubSeq deduplicates sequences and never overwrites metadata, you may still want to check whether your data is already in the system, either by querying some metadata as described in Query metadata with SPARQL, or by simply downloading and checking one of the files on the download page. We find GenBank MT536190.1 has not been included yet. A FASTA text file can be downloaded to your local disk and uploaded through our web upload page. Make sure the file is valid FASTA and does not include any HTML!
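PubSeq runs its own QC on upload, but it is easy to sanity-check a file locally first. A minimal sketch (our own illustration, not the PubSeq QC code) that rejects empty files, files without a FASTA header, and accidental HTML downloads:

```python
def looks_like_fasta(path):
    """Rough sanity check for a single-record nucleotide FASTA:
    a '>' header line followed by sequence lines, and no stray HTML."""
    with open(path) as fh:
        lines = [line.strip() for line in fh if line.strip()]
    if not lines or not lines[0].startswith(">"):
        return False
    body = "".join(lines[1:]).upper()
    # Reject files that are actually a saved HTML error page, or empty
    if "<HTML" in body or not body:
        return False
    # Allow nucleotides plus IUPAC ambiguity codes and gaps
    return all(c in "ACGTUNRYKMSWBDHV-" for c in body)
```

This only guards against the most common upload mistakes; the server-side QC decides whether the sequence is a plausible SARS-CoV-2 genome.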

Note: we currently only allow FASTA uploads. In the near future we'll allow for uploading raw sequence files. This is important for creating an improved pangenome.

4 Step 2: Add metadata

The web upload page contains fields for adding metadata. Metadata is important for analysis. The metadata is available for queries, see Query metadata with SPARQL, and can be used to annotate variations of the virus in different ways. Metadata also includes attribution details.

A number of fields are obligatory: sample id, date, location, technology and authors. The others are optional, but it is valuable to enter them when information is available. Metadata is defined in this schema. From this schema we generate the input form. Note that optional fields have a question mark in the type. You can add code for metadata yourself because this is a public resource! See also Modify metadata for more information. The fields are:

4.1 Obligatory fields

4.1.1 Sample ID (sample_id)

This is a string field that defines a unique sample identifier chosen by the submitter. In addition to sample_id we also have host_id, provider and submitter_sample_id, where host_id identifies the host the sample came from, provider is the institution's sample id and submitter_sample_id is the submitting individual's sample id. host_id is important when multiple sequences come from the same host. Make sure not to have spaces in the sample_id.
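The no-spaces rule is easy to enforce before building the metadata; a tiny sketch (the check is our own suggestion, not taken from the PubSeq validator):

```python
import re

def valid_sample_id(sample_id):
    """Accept a non-empty identifier without any whitespace,
    e.g. a GenBank accession like MT536190.1."""
    return bool(re.fullmatch(r"\S+", sample_id))
```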

Here we add the GenBank ID MT536190.1.

4.1.2 Collection date

Estimated collection date. The GenBank page says April 6, 2020.

4.1.3 Collection location

A search on Wikidata gives a URI for Los Angeles, which we paste into the location field.

4.1.4 Sequencing technology

The GenBank entry says Illumina, so we can fill that in.

4.1.5 Authors

GenBank entry says 'Lamers,S., Nolan,D.J., Rose,R., Cross,S., Moraga Amador,D., Yang,T., Caruso,L., Navia,W., Von Borstel,L., Hui Zhou,X., Freehan,A. and Garcia-Diaz,J.', so we can fill that in.

4.2 Optional fields

All other fields are optional. But let's see what we can add.

4.2.1 Host information

Sadly, not much is known about the host from GenBank. A little sleuthing turns up an interesting paper by some of the authors, titled SARS-CoV-2 is consistent across multiple samples and methodologies, but it dates after the sample and only notes that its raw data came from the SRA database, so it probably does not describe this particular sample. We don't know what this strain of SARS-CoV-2 did to the person, nor what the person was like (say, age group).

4.2.2 Collecting institution

We can fill that in.

4.2.3 Specimen source

We have that: nasopharyngeal swab.

4.2.4 Source database accession

The source database is GenBank. Note we plug in our own identifier MT536190.1.

4.2.5 Strain name


5 Step 3: Submit to COVID-19 PubSeq

Once you have the sequence and the metadata together, hit the 'Add to Pangenome' button. The data will be checked, submitted and the workflows should kick in!

5.1 Trouble shooting

Ooops. We got an error saying {"stem": "",… which means that our location field was not formed correctly! After fixing it (note http instead of https, and entity instead of wiki) the submission went through. Reload the page (it won't empty the fields) to re-enable the submit button.

6 Step 4: Check output

The current pipeline takes some time to complete! Once it completes, the updated data can be checked on the DOWNLOAD page, and this SPARQL query shows some of the metadata we put in.

7 Bulk sequence uploader

The above steps require a manual upload of one sequence with metadata. What if you have a number of sequences you want to upload in bulk? For this we have a command line version of the uploader that can submit directly to COVID-19 PubSeq. It accepts a FASTA sequence file and associated metadata in YAML format. The YAML matches the web form and is validated against the same schema.

A minimal example of metadata looks like

id: placeholder

sample:
    sample_id: XX
    collection_date: "2020-01-01"

technology:
    sample_sequencing_technology: []

submitter:
    authors: [John Doe]

A more elaborate example (note most fields are optional) may look like

id: placeholder

host:
    host_id: XX1
    host_age: 20
    host_treatment: Process in which the act is intended to modify or alter host status (Compounds)
    host_vaccination: [vaccines1,vaccine2]
    additional_host_information: Optional free text field for additional information

sample:
    sample_id: Id of the sample as defined by the submitter
    collector_name: Name of the person that took the sample
    collecting_institution: Institute that was responsible of sampling
    specimen_source: [,]
    collection_date: "2020-01-01"
    sample_storage_conditions: frozen specimen
    source_database_accession: []
    additional_collection_information: Optional free text field for additional information

virus:
    virus_strain: SARS-CoV-2/human/CHN/HS_8/2020

technology:
    sample_sequencing_technology: [,]
    alignment_protocol: Protocol used for assembly
    sequencing_coverage: [70.0, 100.0]
    assembly_method: ""
    additional_technology_information: Optional free text field for additional information

submitter:
    authors: [John Doe, Joe Boe, Jonny Oe]
    submitter_name: [John Doe]
    submitter_address: John Doe's address
    originating_lab: John Doe kitchen
    lab_address: John Doe's address
    provider: XXX1
    submitter_sample_id: XXX2
    publication: PMID00001113
    submitter_orcid: [,]
    additional_submitter_information: Optional free text field for additional information

More metadata is yummy when stored in RDF, and Yummydata is useful to a wider community. Note that many of the terms in the above example are URIs, such as the value of host_species. We use web ontologies for these to make the data less ambiguous and more FAIR. Check out the optional fields as defined in the schema. If a term is not listed, check the labels.ttl file. A little bit of web searching may be required, or contact us.
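Before submitting a generated YAML file you may want to check that the obligatory fields listed earlier are present. A minimal sketch over an already-parsed metadata dict (the block/field grouping follows the examples above; this is not the schema validator PubSeq itself uses):

```python
# Obligatory fields per metadata block, as described in this document
REQUIRED = {
    "sample": ["sample_id", "collection_date", "collection_location"],
    "technology": ["sample_sequencing_technology"],
    "submitter": ["authors"],
}

def missing_fields(metadata):
    """Return 'block.field' names absent or empty in the metadata dict."""
    missing = []
    for block, fields in REQUIRED.items():
        for field in fields:
            if not metadata.get(block, {}).get(field):
                missing.append(f"{block}.{field}")
    return missing
```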

7.1 Run the uploader (CLI)

After installing with pip you should be able to run

bh20sequploader sequence.fasta metadata.yaml

Alternatively the script can be installed from GitHub. Run on the command line

python3 bh20sequploader/ example/sequence.fasta example/maximum_metadata_example.yaml

after installing dependencies (also described in INSTALL with the GNU Guix package manager). The --help shows

Entering sequence uploader
usage: [-h] [--validate] [--skip-qc] [--trusted] metadata sequence_p1 [sequence_p2]

Upload SARS-CoV-19 sequences for analysis

positional arguments:
  metadata     sequence metadata json
  sequence_p1  sequence FASTA/FASTQ
  sequence_p2  sequence FASTQ pair

optional arguments:
  -h, --help   show this help message and exit
  --validate   Dry run, validate only
  --skip-qc    Skip local qc check
  --trusted    Trust local validation and add directly to validated project

The web interface uses this exact same script, so it should just work (TM).

7.2 Example: uploading bulk GenBank sequences

At this point, most of PubSeq's FASTA files come from NCBI GenBank. This data is public (see the policy) and we provide it with metadata under a CC-BY-4.0 license.

We use multiple scripts to fetch, check and update data. Since there are dependencies involved, we suggest matching our development setup and using a GNU Guix environment to run the tools.

The scripts to pull data from GenBank are in workflows/pull-data/genbank. The scripts that query PubSeq are in workflows/pubseq.

7.2.1 List PubSeq IDs

The first script to run fetches a list of PubSeq IDs, so that we don't download data already in PubSeq. It requires Ruby 3 and nothing else as a dependency, and takes a second to run:

ruby pubseq-fetch-ids > pubseq_ids.txt
head pubseq_ids.txt

showing all GenBank IDs stored in PubSeq.

7.2.2 Fetch GenBank IDs

In the next step we essentially follow the PubSeq GenBank README and fetch the GenBank IDs as a list with

python3 --skip pubseq_ids.txt > genbank_ids.txt

This list tells us what to download from GenBank with the next script.
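The --skip logic amounts to a set difference between the fresh GenBank list and the IDs already in PubSeq. A sketch of the idea (not the actual script):

```python
def ids_to_fetch(genbank_ids, pubseq_ids_file):
    """Drop GenBank IDs already known to PubSeq (one ID per line
    in pubseq_ids_file), preserving the input order."""
    with open(pubseq_ids_file) as fh:
        known = {line.strip() for line in fh if line.strip()}
    return [gb_id for gb_id in genbank_ids if gb_id not in known]
```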

7.2.3 Fetch GenBank XML

Here we fetch the XML files for all the IDs that are listed in genbank_ids.txt. This is a slow procedure!

# --- fetch XML
python3 --ids genbank_ids.txt --out ~/tmp/genbank

Sometimes the download stops. In that case you can restart it with the above command; it will only fetch the missing files. With the same genbank_ids.txt file that should work fine.
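The restart works because the fetcher skips IDs whose XML file already exists in the output directory. A sketch of that skip-if-present loop (fetch_xml stands in for the actual download call, which is not shown here):

```python
import os

def fetch_missing(ids, outdir, fetch_xml):
    """Download XML only for IDs without an existing output file,
    so an interrupted run can simply be restarted."""
    os.makedirs(outdir, exist_ok=True)
    fetched = []
    for genbank_id in ids:
        path = os.path.join(outdir, genbank_id + ".xml")
        if os.path.exists(path):
            continue  # already downloaded in an earlier run
        with open(path, "w") as fh:
            fh.write(fetch_xml(genbank_id))
        fetched.append(genbank_id)
    return fetched
```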

7.2.4 Transform and normalize data for uploading

Now we have the GenBank XML data we can start transforming and normalizing the metadata

# --- Basic transform to YAML/JSON and FASTA
python3 --out ~/tmp/pubseq [XML file(s)]

which also writes a file named 'state.json' in the output directory. This file contains all errors and warnings! For example we find

"MT665288": {
    "valid": false,
    "error": "Sequence too short for MT665288",
    "warnings": [
        "Missing host_species",
        "Missing collection_location",
        "Missing collection_date",
        "Missing host_species",
        "Missing specimen_source"

Ouch! Not only does the sequence fail QC, the record also fails on other metadata fields. This means the record gets dropped.
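Since state.json holds a validity flag, error and warnings per record, a few lines of Python give a quick report of what will be dropped (field names as in the excerpt above; the reporting function itself is our own sketch):

```python
import json

def report_invalid(state_file):
    """Map each record marked invalid in state.json to its error message."""
    with open(state_file) as fh:
        state = json.load(fh)
    return {acc: rec.get("error", "")
            for acc, rec in state.items()
            if not rec.get("valid", False)}
```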

Note we split transformation from normalization of metadata, mostly because transformation is tied to the source of the data. In this case we transform GenBank XML to first-stage JSON.

7.2.5 Normalize metadata

In the next stage we adjust and normalize the metadata so it can be transformed to RDF. This step checks data and transforms ambiguous statements into 'absolute' statements where possible. For example, human and Homo sapiens mean the same thing and translate to the same unambiguous URI.

Note that, in addition to the earlier state.json file, which refers to the input YAML/JSON files, we pass in two optional comma-separated transformation files which are simple mappings, e.g. for the above

Homo sapiens,

Do note, however, that we are increasingly moving to handling such mappings in regex code.
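The CSV mappings and the regex rules do the same job: rewrite a free-text value to one canonical form. A sketch of the combined lookup (the mapping table and patterns here are illustrative, not the project's actual tables, and the real pipeline maps to a URI rather than a label):

```python
import re

# Exact-match mapping, as would be loaded from a CSV mapping file
SPECIES_MAP = {"human": "Homo sapiens"}

# Regex fallbacks for messier free-text input
SPECIES_PATTERNS = [(re.compile(r"homo\s+sapiens", re.I), "Homo sapiens")]

def normalize_species(value):
    """Map a free-text species to its canonical form, or None if unknown."""
    canonical = SPECIES_MAP.get(value.strip().lower())
    if canonical:
        return canonical
    for pattern, replacement in SPECIES_PATTERNS:
        if pattern.search(value):
            return replacement
    return None
```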

To run the normalization use the command:

python3 -s ~/tmp/yamlfa/state.json --species ncbi_host_species.csv --specimen specimen.csv --validate

To work on a single JSON/YAML file/record, you can pass in the ID, e.g.

python3 -s ~/tmp/new-yamlfa-orig.2/state.json --validate MW084447

To write output specify the output dir:

python3 -s ~/tmp/new-yamlfa-orig.2/state.json --validate MW084447 --out ~/tmp/test

and run a diff with, for example,

colordiff ~/tmp/new-yamlfa-orig.2/MW084447.json ~/tmp/test/MW084447.json

or a fancier version using the excellent jq tool:

colordiff <(jq -S . ~/tmp/new-yamlfa-orig.2/MW084447.json) <(jq -S . ~/tmp/test/MW084447.json )

which shows the transformation very clearly:

<     "host_species": "Homo sapiens"
>     "host_species": ""

7.2.6 Upload data

7.3 Example: preparing metadata from spreadsheets

Usually, metadata are available in a tabular format, such as spreadsheets (GenBank data entry uses that as a default). As an example, we provide a script that shows how to parse your metadata from a spreadsheet into YAML files, from a template, ready for PubSeq upload. To execute the script, go into the bh20-seq-resource/scripts/esr_samples directory and execute


You will find the YAML files in the `yaml` folder which will be created in the same directory.

In the example we use Python pandas to read the spreadsheet into a tabular structure. Next we use a template.yaml file that gets filled in for each sample, so we end up with one metadata YAML file per sample.

Next, run the earlier CLI uploader for each YAML and FASTA combination. It can't get much easier than this. For ESR we uploaded a batch of 600 sequences this way, writing only a few lines of Python code. See the example.
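The whole pipeline, from spreadsheet row to rendered template to uploader call, can be sketched with just the standard library (the real esr_samples script uses pandas and its own template.yaml; the template fields and the bh20sequploader invocation below are illustrative):

```python
import csv
import subprocess
from pathlib import Path
from string import Template

# Minimal stand-in for template.yaml; field names follow the
# metadata examples earlier in this document.
TEMPLATE = Template("""id: placeholder

sample:
    sample_id: $sample_id
    collection_date: "$collection_date"

submitter:
    authors: [$authors]
""")

def spreadsheet_to_yaml(samples_csv, outdir="yaml"):
    """Render one metadata YAML file per spreadsheet row."""
    out = Path(outdir)
    out.mkdir(parents=True, exist_ok=True)
    written = []
    with open(samples_csv, newline="") as fh:
        for row in csv.DictReader(fh):
            path = out / (row["sample_id"] + ".yaml")
            path.write_text(TEMPLATE.substitute(row))
            written.append(path)
    return written

def upload(fasta, yaml_file):
    """Hand one sequence/metadata pair to the CLI uploader."""
    subprocess.run(["bh20sequploader", str(fasta), str(yaml_file)],
                   check=True)
```

Looping `upload` over the generated files reproduces the bulk submission described above.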


Other documents

We fetch sequence data and metadata. We query the metadata in multiple ways using SPARQL and ontologies
We submit a sequence to the database. In this BLOG we fetch a sequence from GenBank and add it to the database.
We modify a workflow to get new output
We modify metadata for all to use! In this BLOG we add a field for a creative commons license.
Dealing with PubSeq localisation data
We explore the Arvados command line and API
Generate the files needed for uploading to EBI/ENA
Documentation for PubSeq REST API