
AWS Machine Learning Modeling Problems

by 캔자 2023. 2. 2.
  1. What is the main difference between supervised and unsupervised learning algorithms in SageMaker?
    a. Supervised learning algorithms require labeled data, while unsupervised learning algorithms do not.
    b. Unsupervised learning algorithms require labeled data, while supervised learning algorithms do not.
    c. Supervised learning algorithms are more accurate than unsupervised learning algorithms.
    d. Unsupervised learning algorithms are more accurate than supervised learning algorithms.

  2. What is the purpose of early stopping in deep learning models?
    a. To improve the accuracy of the model by adjusting weights and biases in each epoch.
    b. To stop training when the model starts to overfit to the training data.
    c. To reduce the number of parameters in the model to make it more efficient.
    d. To ensure that the model has enough capacity to learn complex relationships in the data.

  3. Which of the following is NOT a commonly used loss function in deep learning?
    a. Mean Squared Error (MSE)
    b. Cross-Entropy Loss
    c. Structural Similarity Index (SSIM)
    d. Categorical Cross-Entropy Loss

  4. What is the purpose of hyperparameter tuning in machine learning models?
    a. To optimize the performance of the model by selecting the best values for parameters such as learning rate, batch size, and number of hidden layers.
    b. To avoid overfitting by reducing the number of parameters in the model.
    c. To increase the accuracy of the model by adjusting weights and biases in each epoch.
    d. To reduce the complexity of the model by using a smaller training data set.

  5. What is the difference between batch normalization and layer normalization in deep learning models?
    a. Batch normalization normalizes the activations within each mini-batch, while layer normalization normalizes the activations across the entire layer.
    b. Layer normalization normalizes the activations within each mini-batch, while batch normalization normalizes the activations across the entire layer.
    c. Batch normalization is used in convolutional neural networks, while layer normalization is used in recurrent neural networks.
    d. Layer normalization is used in convolutional neural networks, while batch normalization is used in recurrent neural networks.

  6. What is the main benefit of using transfer learning in deep learning models?
    a. To avoid overfitting by reducing the number of parameters in the model.
    b. To reduce the time and cost of training by leveraging pre-trained models on large data sets.
    c. To increase the accuracy of the model by adjusting weights and biases in each epoch.
    d. To improve the performance of the model by selecting the best values for hyperparameters.

  7. What is the purpose of data augmentation in deep learning models?
    a. To reduce the size of the training data set to make the model more efficient.
    b. To avoid overfitting by generating additional training data based on existing examples.
    c. To reduce the number of parameters in the model to make it more efficient.
    d. To increase the accuracy of the model by adjusting weights and biases in each epoch.

  8. What is the purpose of early stopping in hyperparameter tuning?
    a. To stop the tuning process when the model starts to overfit to the validation data.
    b. To reduce the time and cost of tuning by using a smaller validation data set.
    c. To increase the accuracy of the model by adjusting weights and biases in each epoch.
    d. To avoid overfitting by reducing the number of parameters in the model.

  9. Which of the following is NOT a common regularization technique used in deep learning models?
    a. Dropout
    b. L1 Regularization
    c. L2 Regularization
    d. Data Augmentation

  10. What is the main difference between a deep neural network and a shallow neural network?
    a. A deep neural network has more layers than a shallow neural network.
    b. A shallow neural network has more layers than a deep neural network.
    c. A deep neural network is more complex than a shallow neural network.
    d. A shallow neural network is more complex than a deep neural network.
 
 
 

Answers (a short code sketch illustrating each concept follows this list):

  1. a. Supervised learning algorithms require labeled data, while unsupervised learning algorithms do not.
  2. b. To stop training when the model starts to overfit to the training data.
  3. c. Structural Similarity Index (SSIM)
  4. a. To optimize the performance of the model by selecting the best values for parameters such as learning rate, batch size, and number of hidden layers.
  5. a. Batch normalization normalizes the activations within each mini-batch, while layer normalization normalizes the activations across the entire layer.
  6. b. To reduce the time and cost of training by leveraging pre-trained models on large data sets.
  7. b. To avoid overfitting by generating additional training data based on existing examples.
  8. a. To stop the tuning process when the model starts to overfit to the validation data.
  9. d. Data Augmentation
  10. a. A deep neural network has more layers than a shallow neural network.
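
Question 1 (supervised vs. unsupervised). A minimal sketch of the distinction, using scikit-learn locally as a stand-in for SageMaker's built-in algorithms: the supervised estimator needs the labels y, while the unsupervised one fits on the features X alone.

```python
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

X, y = make_blobs(n_samples=200, centers=3, random_state=0)

supervised = LogisticRegression().fit(X, y)             # requires labels y
unsupervised = KMeans(n_clusters=3, n_init=10).fit(X)   # labels never used

print(supervised.predict(X[:5]))   # class predictions learned from y
print(unsupervised.labels_[:5])    # cluster ids discovered from X alone
```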
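
Question 2 (early stopping). A minimal sketch of the rule, with a made-up validation-loss curve standing in for a real train/validate step: stop once the loss has not improved for `patience` consecutive epochs. Frameworks package the same logic as a callback, e.g. tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=2).

```python
# Simulated per-epoch validation losses: improves, then starts to overfit.
val_losses = [0.90, 0.70, 0.55, 0.50, 0.51, 0.52, 0.53, 0.54]

best, wait, patience = float("inf"), 0, 2
for epoch, val_loss in enumerate(val_losses):
    if val_loss < best:
        best, wait = val_loss, 0   # improvement: reset the counter
    else:
        wait += 1                  # no improvement this epoch
    if wait >= patience:
        print(f"stopping at epoch {epoch}, best val loss {best}")
        break
```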
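
Question 3 (loss functions). NumPy sketches of two of the listed losses; SSIM, by contrast, is primarily an image-quality metric rather than a standard training loss.

```python
import numpy as np

def mse(y_true, y_pred):
    # Mean Squared Error, the usual regression loss.
    return np.mean((y_true - y_pred) ** 2)

def categorical_cross_entropy(y_true, y_pred, eps=1e-12):
    # y_true is one-hot, y_pred holds predicted class probabilities.
    return -np.mean(np.sum(y_true * np.log(y_pred + eps), axis=1))

y_true = np.array([[1.0, 0.0], [0.0, 1.0]])
y_pred = np.array([[0.8, 0.2], [0.3, 0.7]])
print(mse(y_true, y_pred), categorical_cross_entropy(y_true, y_pred))
```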
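
Question 4 (hyperparameter tuning). A scikit-learn grid-search sketch over a learning rate and hidden-layer sizes; SageMaker's HyperparameterTuner runs the same kind of search across separate training jobs.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPClassifier

X, y = load_iris(return_X_y=True)
param_grid = {
    "learning_rate_init": [0.001, 0.01],
    "hidden_layer_sizes": [(16,), (32, 16)],
}
search = GridSearchCV(MLPClassifier(max_iter=500, random_state=0),
                      param_grid, cv=3)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```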
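
Question 5 (batch vs. layer normalization). A NumPy sketch on a (batch, features) array; the learnable scale and shift parameters are omitted so the axis difference stays visible.

```python
import numpy as np

x = np.random.randn(4, 3)   # batch of 4 examples, 3 features each

# Batch norm: statistics over the batch axis, one mean/var per feature.
bn = (x - x.mean(axis=0)) / np.sqrt(x.var(axis=0) + 1e-5)

# Layer norm: statistics over the feature axis, one mean/var per example.
ln = (x - x.mean(axis=1, keepdims=True)) / np.sqrt(
    x.var(axis=1, keepdims=True) + 1e-5)

print(bn.mean(axis=0).round(6))   # ~0 for every feature
print(ln.mean(axis=1).round(6))   # ~0 for every example
```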
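
Question 6 (transfer learning). A Keras sketch that freezes an ImageNet-pretrained backbone and trains only a new head; the 10-class output is an assumed target task.

```python
import tensorflow as tf

# Pretrained feature extractor, with its ImageNet classifier removed.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False   # freeze the pretrained weights

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),  # new task head
])
model.compile(optimizer="adam", loss="categorical_crossentropy")
model.summary()   # only the new Dense head contributes trainable parameters
```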
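
Question 7 (data augmentation). A minimal NumPy sketch that turns one training image into three by flipping and adding pixel noise; real pipelines use richer transforms such as crops, rotations, and color jitter.

```python
import numpy as np

def augment(image):
    # Two extra variants of one image: a horizontal flip and a copy
    # with small Gaussian pixel noise.
    flipped = image[:, ::-1]
    noisy = np.clip(image + np.random.normal(0, 0.05, image.shape), 0, 1)
    return [flipped, noisy]

image = np.random.rand(32, 32)        # stand-in for a training image
dataset = [image] + augment(image)    # one original -> three examples
print(len(dataset))
```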
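
Question 8 (early stopping in tuning). A sketch of one common rule, median stopping: abandon a trial whose intermediate validation score trails the median of completed trials. The scores are made up; SageMaker's managed variant is enabled by creating the HyperparameterTuner with early_stopping_type="Auto".

```python
import statistics

completed_scores = [0.81, 0.84, 0.86]   # accuracy of finished trials
median = statistics.median(completed_scores)

def should_stop(intermediate_score):
    # Stop a running trial that is already below the median.
    return intermediate_score < median

print(should_stop(0.70))   # True: unpromising, stop early
print(should_stop(0.85))   # False: keep training
```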
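
Question 9 (regularization). NumPy sketches of two of the listed techniques, inverted dropout on activations and an L2 penalty on weights; data augmentation also fights overfitting, but by enlarging the data rather than constraining the model, which is why it is the odd one out here.

```python
import numpy as np

rng = np.random.default_rng(0)
activations = rng.random((4, 8))
weights = rng.standard_normal((8, 8))

# Dropout: randomly zero activations during training, then rescale
# ("inverted dropout") so the expected activation is unchanged.
keep_prob = 0.8
mask = rng.random(activations.shape) < keep_prob
dropped = activations * mask / keep_prob

# L2 regularization: a weight-norm penalty added to the training loss.
lam = 0.01
l2_penalty = lam * np.sum(weights ** 2)
print(dropped.shape, round(l2_penalty, 4))
```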
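
Question 10 (deep vs. shallow networks). A Keras sketch contrasting layer counts; "deep" vs. "shallow" here refers only to the number of layers, with the layer width held fixed.

```python
import tensorflow as tf

shallow = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1),
])
deep = tf.keras.Sequential(
    [tf.keras.layers.Dense(16, activation="relu") for _ in range(5)]
    + [tf.keras.layers.Dense(1)]
)
print(len(shallow.layers), len(deep.layers))   # 2 vs. 6 layers
```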
 
 
