The parameter C, or box constraint, which trades off how much the misclassification of isolated points modifies the decision boundary, is called 'BoxConstraint' in Matlab. It needs to be specified as a positive numeric scalar.
For the parameter Gamma, after reading a little on the scikit-learn help I am not sure whether you mean the 'KernelScale' parameter in Matlab or 'Weights', but I am going to go with the latter. The 'Weights' parameter defines how much each observation influences the shape of the decision boundary; it can be specified as the name of a column of the table where the data is provided, or simply as a numeric vector with as many elements as there are training samples.
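To make the mapping concrete, here is a minimal sketch of setting these parameters in a single fitcsvm call. It assumes a numeric feature matrix X (one row per observation), a label vector y, and a weight vector w of the same length as y; all three names are hypothetical placeholders for your own data.

    % Hypothetical data: X (features), y (labels), w (observation weights)
    w = ones(size(y));                   % uniform weights as a starting point
    SVMModel = fitcsvm(X, y, ...
        'KernelFunction', 'rbf', ...     % Gaussian/RBF kernel
        'BoxConstraint', 1, ...          % C: positive numeric scalar
        'KernelScale', 1, ...            % sigma of the RBF kernel
        'Weights', w);                   % per-observation influence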
I am used to the R2015b version, where brute force or self-written code are the main ways to optimise these hyperparameters. However, I believe from R2016b they introduced hyperparameter optimisation routines that, I believe, use Gaussian processes (Bayesian optimisation) for this purpose.
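If you are on R2016b or later, a minimal sketch of that routine looks like this, again assuming the hypothetical X and y from above; 'OptimizeHyperparameters' drives the Bayesian search over the parameters you name:

    % Let fitcsvm search BoxConstraint and KernelScale automatically
    Mdl = fitcsvm(X, y, ...
        'KernelFunction', 'rbf', ...
        'OptimizeHyperparameters', {'BoxConstraint', 'KernelScale'});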
Attached is an example of how I do a brute-force search for the box constraint and other parameters.
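In the same spirit as the attachment, here is a minimal sketch of such a brute-force search, assuming the hypothetical X and y from above; the grids and the 5-fold cross-validation loss are illustrative choices, not a prescription:

    % Grid search over BoxConstraint and KernelScale, scored by CV loss
    boxGrid   = 10.^(-2:2);              % candidate BoxConstraint values
    scaleGrid = 10.^(-2:2);              % candidate KernelScale values
    bestLoss  = Inf;
    for c = boxGrid
        for s = scaleGrid
            CVMdl = fitcsvm(X, y, 'KernelFunction', 'rbf', ...
                'BoxConstraint', c, 'KernelScale', s, ...
                'KFold', 5);             % 5-fold cross-validation
            L = kfoldLoss(CVMdl);        % misclassification rate
            if L < bestLoss
                bestLoss = L; bestC = c; bestS = s;
            end
        end
    end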
Please see this page and in particular read the last example for the optimisation I am referring to.
Yes, absolutely, and please report back whether the hyperparameter optimisation helped reduce the training time. Also note that training can be time consuming when you ask the algorithm to search for separability on a set where the VC dimension is higher than the number of features in your array, or where separability is just not possible. This is where the mere fact of defining a BoxConstraint, which makes your SVM a soft-margin algorithm, should considerably reduce your training time!