The default settings for TorchServe should be sufficient for most use cases. However, if you want to customize TorchServe, the configuration options described in this topic are available.

There are three ways to configure TorchServe. In order of priority, they are:

1. Environment variables
2. Command line arguments
3. The config.properties file

For example, the value of an environment variable overrides both command line arguments and a property in the configuration file, and the value of a command line argument overrides a value in the configuration file.

## Environment variables

You can change TorchServe behavior by setting environment variables. Note: environment variables have higher priority than command line arguments or config.properties. The value of an environment variable overrides the other property values.

## Command line parameters

Customize TorchServe behavior by using the following command line arguments when you call torchserve:

- --ts-config: TorchServe loads the specified configuration file if the TS_CONFIG_FILE environment variable is not set
- --model-store: overrides the model_store property in the config.properties file
- --models: overrides the load_models property in config.properties
- --log-config: overrides the default log4j2.xml
- --foreground: runs TorchServe in the foreground. If this option is disabled, TorchServe runs in the background.

For more detailed information about torchserve command line options, see Serve Models with TorchServe.

## config.properties file

TorchServe uses a config.properties file to store configurations. TorchServe uses the following, in order of priority, to locate this config.properties file:

1. If the TS_CONFIG_FILE environment variable is set, TorchServe loads the configuration from the path specified by the environment variable.
2. If the --ts-config parameter is passed to torchserve, TorchServe loads the configuration from the path specified by the parameter.
3. If there is a config.properties in the folder where you call torchserve, TorchServe loads the config.properties file from the current working directory.
4. If none of the above is specified, TorchServe loads a built-in configuration with default values.

To control the TorchServe frontend memory footprint, configure the vmargs property in the config.properties file. Adjust the JVM options to fit your memory requirements.

### Load models at startup

You can configure TorchServe to load models during startup by setting the model_store and load_models properties.

load_models:
- standalone: default: N/A. No models are loaded at startup.
- all: load all models present in model_store.
- model1.mar, model2.mar: load the models in the specified MAR files from model_store.
- model1=model1.mar, model2=model2.mar: load the models with the specified names and MAR files from model_store.

model_store:
- standalone: default: N/A. Loading models from the local disk is disabled.
- pathname: the model store location is specified by the value of pathname.

Note: the model_store and load_models properties are overridden by command line parameters, if specified.

### Configure TorchServe listening address and port

TorchServe doesn't support authentication natively. To avoid unauthorized access, TorchServe only allows localhost access by default. The inference API is listening on port 8080. The management API is listening on port 8081.

- inference_address: inference API binding address.
- management_address: management API binding address.
- metrics_address: metrics API binding address.

To run predictions on models from a public IP address, specify the IP address as 0.0.0.0. To run predictions on models from a specific IP address, specify the IP address and port. For example, you can bind the inference API to a private network interface by setting inference_address to that interface's address and port.

### Configure TorchServe gRPC listening ports

By default, the inference gRPC API listens on port 7070 and the management gRPC API listens on port 7071. To configure different ports, use the following properties:

- grpc_inference_port: inference gRPC API binding port.
- grpc_management_port: management gRPC API binding port.

### Enable SSL

To enable HTTPS, change the inference_address, management_address, or metrics_address protocol from http to https. The default HTTPS port is 443, but you can make TorchServe listen on whatever port you set to accept HTTPS requests. For example, to receive HTTPS traffic on port 8443, you would set the address to an https URL with port 8443.

You must also provide a certificate and private key to enable SSL. TorchServe supports two ways to configure SSL:

Option 1: Use a keystore. Generate a keystore with Java's keytool. Note: the storepass argument expects you to create your own password. The password (if applicable) MUST be the same as the keystore password. If multiple private key entries exist in the keystore, the first one will be used.

Option 2: Use private key and certificate files.
- private_key_file: the private key file location. Supports both PKCS8 and OpenSSL private keys.
- certificate_file: the X509 certificate chain file location.
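As a sketch of the vmargs property described above: it takes standard JVM flags. The specific flags below (a heap cap and OOM behavior) are illustrative assumptions, not TorchServe defaults; tune them to your own memory budget.

```properties
# config.properties
# Illustrative JVM flags for the TorchServe frontend (assumed values):
# cap the heap at 1 GB and exit the JVM on OutOfMemoryError
vmargs=-Xmx1g -XX:+ExitOnOutOfMemoryError
```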
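A minimal sketch of loading models at startup with the model_store and load_models properties; the path and model names below are hypothetical.

```properties
# config.properties
# Hypothetical model store directory and two named models
model_store=/tmp/model_store
load_models=model1=model1.mar,model2=model2.mar
```

Recall that command line parameters override these properties if specified, e.g. `torchserve --start --model-store /tmp/model_store --models all` (hypothetical invocation).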
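A sketch of the listening-address properties under the defaults described above (8080 for inference, 8081 for management). The metrics port and the choice of which APIs to expose publicly are illustrative assumptions.

```properties
# config.properties
# Expose the inference API on all interfaces; keep management and metrics local.
inference_address=http://0.0.0.0:8080
management_address=http://127.0.0.1:8081
# Metrics port is an assumed value for illustration
metrics_address=http://127.0.0.1:8082
```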
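The gRPC ports can be moved off their defaults (7070/7071) with the two properties named above; the replacement port numbers here are hypothetical.

```properties
# config.properties
# Bind the gRPC APIs to hypothetical non-default ports
grpc_inference_port=17070
grpc_management_port=17071
```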
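A sketch of the keystore route for SSL (option 1): you might generate a self-signed keystore with keytool, e.g. `keytool -genkeypair -keyalg RSA -keystore keystore.p12 -storetype PKCS12 -storepass <your-password>` (arguments illustrative), then point TorchServe at it. The keystore, keystore_pass, and keystore_type property names, the 8443/8444 ports, and all file names below are assumptions for illustration; only private_key_file and certificate_file come from this document.

```properties
# config.properties
# HTTPS on hypothetical ports
inference_address=https://127.0.0.1:8443
management_address=https://127.0.0.1:8444

# Option 1: keystore (property names assumed; use your own storepass password)
keystore=keystore.p12
keystore_pass=changeit
keystore_type=PKCS12

# Option 2: private key + certificate files (hypothetical paths) -- use
# either option 1 or option 2, not both
# private_key_file=mykey.key
# certificate_file=mycert.pem
```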