Package org.gbif.api.model.crawler
Class CrawlJob
java.lang.Object
org.gbif.api.model.crawler.CrawlJob
This class represents a job to be worked on by a crawler. The target can be either one of the XML-based protocols (BioCASe, DiGIR, TAPIR) or a DwC-Archive.
For now this object is used in JSON-serialized form in ZooKeeper.
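As an illustration of that ZooKeeper usage, the sketch below round-trips a CrawlJob through JSON with Jackson. It is only a sketch: it assumes EndpointType lives in org.gbif.api.vocabulary, that the class's Jackson annotations take care of deserialization, and the dataset key, endpoint type and target URL are made-up values.

  import java.net.URI;
  import java.util.UUID;

  import com.fasterxml.jackson.databind.ObjectMapper;

  import org.gbif.api.model.crawler.CrawlJob;
  import org.gbif.api.vocabulary.EndpointType;

  public class CrawlJobJsonSketch {
    public static void main(String[] args) throws Exception {
      // First attempt at crawling an (invented) Darwin Core Archive dataset.
      CrawlJob job = new CrawlJob(
          UUID.randomUUID(),
          EndpointType.DWC_ARCHIVE,
          URI.create("http://data.example.org/dwca.zip"),
          1,
          null);

      ObjectMapper mapper = new ObjectMapper();

      // The JSON form of the job, as it would be stored in ZooKeeper.
      String json = mapper.writeValueAsString(job);

      // Reading it back recovers a CrawlJob instance (assumes the class's
      // Jackson annotations wire up the constructor for deserialization).
      CrawlJob restored = mapper.readValue(json, CrawlJob.class);
      System.out.println(json);
      System.out.println(restored);
    }
  }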
-
Constructor Summary
Constructors
Method Summary
Modifier and Type     Method                     Description
UUID                  getDatasetKey()
EndpointType          getEndpointType()
URI                   getTargetUrl()
int                   getAttempt()
Map<String,String>    getProperties()            Used to save protocol specific information (e.g. contentNamespace for TAPIR and BioCASe).
String                getProperty(String name)
boolean               equals(Object obj)
int                   hashCode()
String                toString()
-
Constructor Details
-
CrawlJob
public CrawlJob(UUID datasetKey, EndpointType endpointType, URI targetUrl, int attempt, @Nullable Map<String, String> properties)
Creates a new crawl job.
Parameters:
datasetKey - of the dataset to crawl
endpointType - of the dataset
targetUrl - of the dataset
attempt - a monotonically increasing counter, increased every time we try to crawl a dataset, whether that attempt is successful or not
properties - a way to provide protocol or crawl specific options
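For example, the sketch below builds a job for a TAPIR endpoint that carries one protocol-specific option. The dataset key, the TAPIR URL and the namespace value are invented; contentNamespace is used as the property name because it is the example given under getProperties().

  import java.net.URI;
  import java.util.Collections;
  import java.util.Map;
  import java.util.UUID;

  import org.gbif.api.model.crawler.CrawlJob;
  import org.gbif.api.vocabulary.EndpointType;

  public class TapirCrawlJobSketch {
    public static void main(String[] args) {
      // Protocol specific options travel in the properties map.
      Map<String, String> properties =
          Collections.singletonMap("contentNamespace", "http://rs.tdwg.org/dwc/dwcore/");

      CrawlJob job = new CrawlJob(
          UUID.randomUUID(),                                     // datasetKey
          EndpointType.TAPIR,                                    // endpointType
          URI.create("http://tapir.example.org/tapir/resource"), // targetUrl
          3,                                                     // attempt: third try, successful or not
          properties);

      System.out.println(job);
    }
  }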
-
CrawlJob
Constructor with mandatory fields. The properties field is set to null.
Parameters:
datasetKey - of the dataset to crawl
endpointType - of the dataset
targetUrl - of the dataset
attempt - a monotonically increasing counter, increased every time we try to crawl a dataset, whether that attempt is successful or not
-
-
Method Details
-
getDatasetKey
-
getEndpointType
-
getProperties
Used to save protocol specific information (e.g. contentNamespace for TAPIR and BioCASe).
Returns:
an immutable map of all the properties
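A short sketch of reading those options back, assuming the job built in the constructor example above and assuming that getProperty(name) returns null for a name that was never set:

  // All options at once, as an immutable map.
  Map<String, String> options = job.getProperties();

  // Or a single lookup by name; the null check reflects the assumption above.
  String namespace = job.getProperty("contentNamespace");
  if (namespace != null) {
    System.out.println("Crawling namespace " + namespace);
  }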
-
getTargetUrl
-
getAttempt
-
getProperty
-
equals
-
hashCode
-
toString
-