1. Re-implementing the Sample
To learn a framework, the first step is to read the documentation with real code next to it. For code, it pays to use the most authoritative source, so I follow Google's official samples at https://github.com/android/camera-samples together with the reference docs at https://developer.android.com/reference/android/hardware/camera2/package-summary. The plan: read the docs once, then rewrite the most basic sample in that repo, Camera2Basic.
What the documentation says:
First: android.hardware.camera2 was added in API level 21, so it is unavailable on older devices. It replaces the deprecated Camera class.
Second: the steps for using Camera2:
1. First obtain a CameraManager instance; it is the starting point for everything. You can use it to enumerate the camera devices, and its getCameraCharacteristics(String) method returns the information about each camera device as a CameraCharacteristics object.
2. Through the CameraManager you can then obtain a CameraDevice for each camera by calling its openCamera method. Each CameraDevice corresponds to one physical camera on the Android device.
3. Next, create a session. A session contains a set of Surfaces whose size and format have already been configured; the camera configures its outputs from them. The Surfaces can come from SurfaceView, SurfaceTexture, MediaCodec, MediaRecorder, Allocation, or ImageReader. In addition, you build a CaptureRequest that bundles the camera parameters with the output Surfaces to target. Every capture sends a CaptureRequest to the camera device, and the device produces the frame according to it; the CaptureRequest is where you set white balance, aperture, exposure time, and so on.
4. Finally, depending on whether you want a still photo (capture) or a continuous stream (setRepeatingRequest), submit the request; still captures take priority over the repeating request. A repeating request resubmits the same CaptureRequest to the device over and over, while capture submits it once.
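Put together, the four steps form one pipeline. Here is a minimal sketch of them chained together (illustrative only: context, viewFinder, cameraId and handler are placeholders, error handling is omitted, and the CAMERA runtime permission is assumed to be granted):

// A minimal end-to-end sketch of the four steps
val manager = context.getSystemService(Context.CAMERA_SERVICE) as CameraManager   // step 1
val characteristics = manager.getCameraCharacteristics(cameraId)                  // step 1: per-device info
manager.openCamera(cameraId, object : CameraDevice.StateCallback() {              // step 2
    override fun onOpened(device: CameraDevice) {
        val targets = listOf(viewFinder.holder.surface)                           // step 3: output Surfaces
        @Suppress("DEPRECATION") // the pre-API-28 overload, kept short for the sketch
        device.createCaptureSession(targets, object : CameraCaptureSession.StateCallback() {
            override fun onConfigured(session: CameraCaptureSession) {
                val request = device.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW)
                        .apply { addTarget(viewFinder.holder.surface) }            // step 3: the request
                session.setRepeatingRequest(request.build(), null, handler)       // step 4: preview stream
            }
            override fun onConfigureFailed(session: CameraCaptureSession) = Unit
        }, handler)
    }
    override fun onDisconnected(device: CameraDevice) = device.close()
    override fun onError(device: CameraDevice, error: Int) = device.close()
}, handler)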
The docs are less comfortable to read than code, so let's go straight to the sample.
The project structure looks like this:
├── app
│ ├── build.gradle
│ └── src
│ ├── main
│ │ ├── AndroidManifest.xml
│ │ ├── java
│ │ │ └── com
│ │ │ └── example
│ │ │ └── android
│ │ │ └── camera2
│ │ │ └── basic
│ │ │ ├── CameraActivity.kt
│ │ │ └── fragments
│ │ │ ├── CameraFragment.kt
│ │ │ ├── ImageViewerFragment.kt
│ │ │ ├── PermissionsFragment.kt
│ │ │ └── SelectorFragment.kt
│ │ └── res
│ │ ├── layout
│ │ │ ├── activity_camera.xml
│ │ │ └── fragment_camera.xml
│ │ ├── navigation
│ │ │ └── nav_graph.xml
└── utils
└── src
└── main
├── AndroidManifest.xml
├── java
│ └── com
│ └── example
│ └── android
│ └── camera
│ └── utils
│ ├── AutoFitSurfaceView.kt
│ ├── CameraSizes.kt
│ ├── ExifUtils.kt
│ ├── GenericListAdapter.kt
│ ├── OrientationLiveData.kt
│ ├── Yuv.kt
│ └── YuvToRgbConverter.kt
CameraActivity uses nav_graph.xml for navigation; the activity itself contains no logic or UI. Essentially all of the logic lives in the fragments.
First comes PermissionsFragment. There is not much to it: it requests the camera permission and, once granted, navigates to SelectorFragment.
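The sample's exact permission code is not reproduced here, but a minimal sketch of such a fragment, assuming the Jetpack result API and a safe-args action named actionPermissionsToSelector (a hypothetical name), could look like:

class PermissionsFragment : Fragment() {

    // Jetpack result API; the callback fires once the user answers the system dialog
    private val requestPermission =
            registerForActivityResult(ActivityResultContracts.RequestPermission()) { granted ->
                if (granted) {
                    // Move on to the camera selector once permission is granted
                    findNavController().navigate(
                            PermissionsFragmentDirections.actionPermissionsToSelector())
                } else {
                    Toast.makeText(requireContext(), "Camera permission denied", Toast.LENGTH_LONG).show()
                }
            }

    override fun onViewCreated(view: View, savedInstanceState: Bundle?) {
        super.onViewCreated(view, savedInstanceState)
        if (ContextCompat.checkSelfPermission(requireContext(), Manifest.permission.CAMERA) ==
                PackageManager.PERMISSION_GRANTED) {
            findNavController().navigate(
                    PermissionsFragmentDirections.actionPermissionsToSelector())
        } else {
            requestPermission.launch(Manifest.permission.CAMERA)
        }
    }
}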
SelectorFragment
override fun onViewCreated(view: View, savedInstanceState: Bundle?) {
    super.onViewCreated(view, savedInstanceState)
    view as RecyclerView
    view.apply {
        layoutManager = LinearLayoutManager(requireContext())

        // Step 1 from the docs: obtain the CameraManager instance
        val cameraManager =
                requireContext().getSystemService(Context.CAMERA_SERVICE) as CameraManager

        // enumerateCameras builds the List<FormatItem> shown in the list
        val cameraList = enumerateCameras(cameraManager)

        // Standard RecyclerView setup
        val layoutId = android.R.layout.simple_list_item_1
        adapter = GenericListAdapter(cameraList, itemLayoutId = layoutId) { view, item, _ ->
            view.findViewById<TextView>(android.R.id.text1).text = item.title
            view.setOnClickListener {
                // Tapping an item navigates to CameraFragment with the cameraId and format
                Navigation.findNavController(requireActivity(), R.id.fragment_container)
                        .navigate(SelectorFragmentDirections.actionSelectorToCamera(
                                item.cameraId, item.format))
            }
        }
    }
}
The key call above is enumerateCameras, which produces the list, so let's see what it does.
private fun enumerateCameras(cameraManager: CameraManager): List<FormatItem> {
    val availableCameras: MutableList<FormatItem> = mutableListOf()

    // Get list of all compatible cameras: keep only the IDs that meet the requirements
    val cameraIds = cameraManager.cameraIdList.filter {
        // Get the characteristics of this camera device
        val characteristics = cameraManager.getCameraCharacteristics(it)
        // Check that the device supports the minimum feature set of a camera
        // (depth-only cameras are not necessarily backward compatible)
        val capabilities = characteristics.get(
                CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES)
        capabilities?.contains(
                CameraMetadata.REQUEST_AVAILABLE_CAPABILITIES_BACKWARD_COMPATIBLE) ?: false
    }

    // Iterate over the compatible cameras and add one entry per supported format;
    // only three formats are considered here: JPEG, RAW_SENSOR and DEPTH_JPEG
    cameraIds.forEach { id ->
        val characteristics = cameraManager.getCameraCharacteristics(id)
        val orientation = lensOrientationString(
                characteristics.get(CameraCharacteristics.LENS_FACING)!!)

        // Query the available capabilities and output formats
        val capabilities = characteristics.get(
                CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES)!!
        val outputFormats = characteristics.get(
                CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP)!!.outputFormats

        // All cameras *must* support JPEG output (a compressed format),
        // so there is no need to check the characteristics for it
        availableCameras.add(FormatItem(
                "$orientation JPEG ($id)", id, ImageFormat.JPEG))

        // Add cameras that support the RAW capability; note that the layout of
        // RAW_SENSOR data differs from sensor to sensor
        if (capabilities.contains(
                CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES_RAW) &&
                outputFormats.contains(ImageFormat.RAW_SENSOR)) {
            availableCameras.add(FormatItem(
                    "$orientation RAW ($id)", id, ImageFormat.RAW_SENSOR))
        }

        // Add cameras that support depth output; rarely needed in practice
        if (capabilities.contains(
                CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES_DEPTH_OUTPUT) &&
                outputFormats.contains(ImageFormat.DEPTH_JPEG)) {
            availableCameras.add(FormatItem(
                    "$orientation DEPTH ($id)", id, ImageFormat.DEPTH_JPEG))
        }
    }

    return availableCameras
}
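For reference, FormatItem is just a small value type; in the sample it is essentially the following, so the list ends up with entries like "Back JPEG (0)":

// One list row: the label shown to the user, the camera ID, and an ImageFormat constant
data class FormatItem(val title: String, val cameraId: String, val format: Int)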
So SelectorFragment essentially just picks a cameraId and a format and hands them to CameraFragment; the important code lives in CameraFragment, so let's look at how it is implemented.
fragmentCameraBinding.viewFinder.holder.addCallback(object : SurfaceHolder.Callback {
    override fun surfaceDestroyed(holder: SurfaceHolder) = Unit

    override fun surfaceChanged(
            holder: SurfaceHolder,
            format: Int,
            width: Int,
            height: Int) = Unit

    // At this point the Surface has been created
    override fun surfaceCreated(holder: SurfaceHolder) {
        // Selects the appropriate preview size and configures the view finder:
        // the largest camera output size that fits within 1080p and the display size.
        // Internally this reads the Size array from
        // characteristics.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP)
        //         .getOutputSizes(SurfaceHolder::class.java),
        // which lists the resolutions the camera can output to a SurfaceHolder
        val previewSize = getPreviewOutputSize(
            fragmentCameraBinding.viewFinder.display,
            characteristics,
            SurfaceHolder::class.java
        )
        Log.d(TAG, "View finder size: ${fragmentCameraBinding.viewFinder.width} x ${fragmentCameraBinding.viewFinder.height}")
        Log.d(TAG, "Selected preview size: $previewSize")

        // Resize the Surface to the chosen camera output size. This Surface is later
        // used to create the session (step 3 above), and the camera picks its output
        // resolution from the Surface size. Matching the aspect ratio of the camera
        // output guarantees the preview is not stretched
        fragmentCameraBinding.viewFinder.setAspectRatio(
            previewSize.width,
            previewSize.height
        )

        // To ensure that size is set, initialize camera in the view's thread
        view.post { initializeCamera() }
    }
})
The crux above is computing the desired camera output size and passing its aspect ratio to the view; the view then adjusts itself until it matches the camera output exactly, so the preview is never stretched. The computation needs the array of output sizes supported by the corresponding CameraDevice.
initializeCamera is then invoked via view.post, which guarantees that the Surface has already taken on the new aspect ratio; the camera, in turn, initializes its output resolution from that Surface.
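The resizing half of that lives in AutoFitSurfaceView from the utils module. A simplified sketch of how such a view can lock its aspect ratio (illustrative; the sample's actual utility is more careful, e.g. it may crop rather than fit):

import android.content.Context
import android.util.AttributeSet
import android.view.SurfaceView
import kotlin.math.roundToInt

class AutoFitSurfaceView @JvmOverloads constructor(
        context: Context, attrs: AttributeSet? = null
) : SurfaceView(context, attrs) {

    private var aspectRatio = 0f

    /** Called with the chosen camera output size, e.g. setAspectRatio(1920, 1080) */
    fun setAspectRatio(width: Int, height: Int) {
        require(width > 0 && height > 0) { "Size must be positive" }
        aspectRatio = width.toFloat() / height.toFloat()
        requestLayout()
    }

    override fun onMeasure(widthMeasureSpec: Int, heightMeasureSpec: Int) {
        super.onMeasure(widthMeasureSpec, heightMeasureSpec)
        val width = MeasureSpec.getSize(widthMeasureSpec)
        val height = MeasureSpec.getSize(heightMeasureSpec)
        if (aspectRatio == 0f) {
            setMeasuredDimension(width, height)
        } else {
            // Express the target ratio relative to the view's own orientation,
            // then shrink one dimension so width:height matches the camera output
            val actualRatio = if (width > height) aspectRatio else 1f / aspectRatio
            if (width < height * actualRatio) {
                setMeasuredDimension(width, (width / actualRatio).roundToInt())
            } else {
                setMeasuredDimension((height * actualRatio).roundToInt(), height)
            }
        }
    }
}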
initializeCamera is the most important method in the whole demo:
private fun initializeCamera() = lifecycleScope.launch(Dispatchers.Main) {
    // Open the selected camera: step 2 from the docs, using the CameraManager
    // and the cameraId to obtain a CameraDevice
    camera = openCamera(cameraManager, args.cameraId, cameraHandler)

    // Initialize an image reader which will be used to capture still photos,
    // using the largest output size available for the requested format
    val size = characteristics.get(
            CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP)!!
            .getOutputSizes(args.pixelFormat).maxByOrNull { it.height * it.width }!!
    imageReader = ImageReader.newInstance(
            size.width, size.height, args.pixelFormat, IMAGE_BUFFER_SIZE)

    // List of Surfaces where the camera will output frames; these are the
    // request/session output targets described in step 3
    val targets = listOf(fragmentCameraBinding.viewFinder.holder.surface, imageReader.surface)

    // Start a capture session using our open camera and the list of Surfaces (step 3).
    // The createCaptureSession helper simply calls CameraDevice.createCaptureSession
    session = createCaptureSession(camera, targets, cameraHandler)

    val captureRequest = camera.createCaptureRequest(
            CameraDevice.TEMPLATE_PREVIEW).apply { addTarget(fragmentCameraBinding.viewFinder.holder.surface) }

    // Step 4: setRepeatingRequest keeps sending the capture request as frequently as
    // possible until the session is torn down or session.stopRepeating() is called,
    // which gives us a continuous stream of preview frames
    session.setRepeatingRequest(captureRequest.build(), null, cameraHandler)

    // Listen to the capture button
    fragmentCameraBinding.captureButton.setOnClickListener {
        // Disable click listener to prevent multiple requests simultaneously in flight
        it.isEnabled = false

        // Perform I/O heavy operations in a different scope
        lifecycleScope.launch(Dispatchers.IO) {
            // takePhoto performs a single still capture
            takePhoto().use { result ->
                Log.d(TAG, "Result received: $result")

                // Save the result to disk
                val output = saveResult(result)
                Log.d(TAG, "Image saved: ${output.absolutePath}")

                // If the result is a JPEG file, update EXIF metadata with orientation info
                if (output.extension == "jpg") {
                    val exif = ExifInterface(output.absolutePath)
                    exif.setAttribute(
                            ExifInterface.TAG_ORIENTATION, result.orientation.toString())
                    exif.saveAttributes()
                    Log.d(TAG, "EXIF metadata saved: ${output.absolutePath}")
                }

                // Display the photo taken to user
                lifecycleScope.launch(Dispatchers.Main) {
                    navController.navigate(CameraFragmentDirections
                            .actionCameraToJpegViewer(output.absolutePath)
                            .setOrientation(result.orientation)
                            .setDepth(Build.VERSION.SDK_INT >= Build.VERSION_CODES.Q &&
                                    result.format == ImageFormat.DEPTH_JPEG))
                }
            }

            // Re-enable click listener after photo is taken
            it.post { it.isEnabled = true }
        }
    }
}
That completes the whole flow of driving a camera: use the CameraManager and a cameraId to create a CameraDevice, create a CameraCaptureSession from that device, and call setRepeatingRequest on the session.
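The openCamera and createCaptureSession calls above are small helpers in the fragment that wrap the callback-based APIs into suspend functions. A sketch of the openCamera wrapper, simplified from the sample (createCaptureSession follows the same suspendCoroutine pattern around CameraDevice.createCaptureSession):

@SuppressLint("MissingPermission") // the permission was already granted in PermissionsFragment
private suspend fun openCamera(
        manager: CameraManager,
        cameraId: String,
        handler: Handler? = null
): CameraDevice = suspendCancellableCoroutine { cont ->
    manager.openCamera(cameraId, object : CameraDevice.StateCallback() {
        // Resume the coroutine with the opened device
        override fun onOpened(device: CameraDevice) = cont.resume(device)

        override fun onDisconnected(device: CameraDevice) {
            Log.w(TAG, "Camera $cameraId has been disconnected")
            device.close()
        }

        override fun onError(device: CameraDevice, error: Int) {
            val exc = RuntimeException("Camera $cameraId error: $error")
            Log.e(TAG, exc.message, exc)
            if (cont.isActive) cont.resumeWithException(exc)
        }
    }, handler)
}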
On top of that there is the takePhoto method, which takes a still photo while the preview keeps running. Here is its implementation:
private suspend fun takePhoto():
        CombinedCaptureResult = suspendCoroutine { cont ->

    // Flush any images left in the image reader
    @Suppress("ControlFlowWithEmptyBody")
    while (imageReader.acquireNextImage() != null) {
    }

    // Start a new image queue
    val imageQueue = ArrayBlockingQueue<Image>(IMAGE_BUFFER_SIZE)
    imageReader.setOnImageAvailableListener({ reader ->
        val image = reader.acquireNextImage()
        Log.d(TAG, "Image available in queue: ${image.timestamp}")
        imageQueue.add(image)
    }, imageReaderHandler)

    // TEMPLATE_STILL_CAPTURE prioritizes image quality over frame rate. The preview
    // Surface plays a white flash animation during capture anyway, so keeping the
    // frame rate up is not a concern here
    val captureRequest = session.device.createCaptureRequest(
            CameraDevice.TEMPLATE_STILL_CAPTURE).apply { addTarget(imageReader.surface) }
    session.capture(captureRequest.build(), object : CameraCaptureSession.CaptureCallback() {

        override fun onCaptureStarted(
                session: CameraCaptureSession,
                request: CaptureRequest,
                timestamp: Long,
                frameNumber: Long) {
            super.onCaptureStarted(session, request, timestamp, frameNumber)
            fragmentCameraBinding.viewFinder.post(animationTask)
        }

        override fun onCaptureCompleted(
                session: CameraCaptureSession,
                request: CaptureRequest,
                result: TotalCaptureResult) {
            super.onCaptureCompleted(session, request, result)
            val resultTimestamp = result.get(CaptureResult.SENSOR_TIMESTAMP)
            Log.d(TAG, "Capture result received: $resultTimestamp")

            // Set a timeout in case image captured is dropped from the pipeline
            val exc = TimeoutException("Image dequeuing took too long")
            val timeoutRunnable = Runnable { cont.resumeWithException(exc) }
            imageReaderHandler.postDelayed(timeoutRunnable, IMAGE_CAPTURE_TIMEOUT_MILLIS)

            // Loop in the coroutine's context until an image with matching timestamp comes
            // We need to launch the coroutine context again because the callback is done in
            // the handler provided to the `capture` method, not in our coroutine context
            @Suppress("BlockingMethodInNonBlockingContext")
            lifecycleScope.launch(cont.context) {
                while (true) {

                    // Dequeue images while timestamps don't match
                    val image = imageQueue.take()
                    // TODO(owahltinez): b/142011420
                    // if (image.timestamp != resultTimestamp) continue
                    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.Q &&
                            image.format != ImageFormat.DEPTH_JPEG &&
                            image.timestamp != resultTimestamp) continue
                    Log.d(TAG, "Matching image dequeued: ${image.timestamp}")

                    // Unset the image reader listener
                    imageReaderHandler.removeCallbacks(timeoutRunnable)
                    imageReader.setOnImageAvailableListener(null, null)

                    // Clear the queue of images, if there are left
                    while (imageQueue.size > 0) {
                        imageQueue.take().close()
                    }

                    // Compute EXIF orientation metadata
                    val rotation = relativeOrientation.value ?: 0
                    val mirrored = characteristics.get(CameraCharacteristics.LENS_FACING) ==
                            CameraCharacteristics.LENS_FACING_FRONT
                    val exifOrientation = computeExifOrientation(rotation, mirrored)

                    // Build the result and resume progress
                    cont.resume(CombinedCaptureResult(
                            image, result, exifOrientation, imageReader.imageFormat))

                    // There is no need to break out of the loop, this coroutine will suspend
                }
            }
        }
    }, cameraHandler)
}
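The reason takePhoto().use { ... } works back in initializeCamera is that CombinedCaptureResult implements Closeable and closes its Image; in the sample it is roughly:

// Closing the result releases the underlying Image buffer back to the ImageReader
data class CombinedCaptureResult(
        val image: Image,
        val metadata: CaptureResult,
        val orientation: Int,
        val format: Int
) : Closeable {
    override fun close() = image.close()
}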
The rest of the code is not essential; this sample teaches the basic flow of using the camera.
Following the sample, I put together the whole camera flow myself:
private void initCameraParam() {
    // Check the camera permission first
    if (!checkPermission())
        return;

    // Obtain the CameraManager instance (step 1)
    mCameraManager = (CameraManager) (getSystemService(Context.CAMERA_SERVICE));
    if (mCameraManager == null)
        return;

    try {
        mData = new ArrayList<>();
        // For simplicity, use the first camera device ID directly
        CameraCharacteristics cc = mCameraManager.getCameraCharacteristics(mCameraManager.getCameraIdList()[0]);
        // AutoFitSurfaceView extends SurfaceView and resizes itself to the aspect
        // ratio passed in; apart from that it behaves exactly like SurfaceView
        AutoFitSurfaceView autoView = findViewById(R.id.camera_surface);
        autoView.getHolder().addCallback(new SurfaceHolder.Callback() {

            // Wait until the Surface has been created before touching the camera
            @Override
            public void surfaceCreated(@NonNull SurfaceHolder holder) {
                // Match the view's initial size against the output sizes the camera
                // supports and pick the best fit
                Display display = autoView.getDisplay();
                Point point = new Point();
                display.getRealSize(point);
                SmartSize renderSize = new SmartSize(point.x, point.y);
                SmartSize maxSize = new SmartSize(1920, 1080);
                if (renderSize.getLongSize() <= maxSize.getLongSize() && renderSize.getShortSize() <= maxSize.getShortSize())
                    maxSize = renderSize;
                StreamConfigurationMap config = cc.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP);
                Size[] sizes = config.getOutputSizes(SurfaceHolder.class);
                SmartSize resultSize = new SmartSize(0, 0);
                for (Size size : sizes) {
                    int longSize = Math.max(size.getWidth(), size.getHeight());
                    int shortSize = Math.min(size.getWidth(), size.getHeight());
                    if (resultSize.getLongSize() <= longSize && longSize <= maxSize.getLongSize() &&
                            resultSize.getShortSize() <= shortSize && shortSize <= maxSize.getShortSize()) {
                        resultSize = new SmartSize(longSize, shortSize);
                    }
                }
                if (resultSize.getShortSize() != 0 && resultSize.getLongSize() != 0)
                    autoView.setAspectRatio(resultSize.getLongSize(), resultSize.getShortSize());

                // view.post guarantees the Surface size change has taken effect
                autoView.post(() -> {
                    try {
                        if (!checkPermission())
                            return;
                        // Open the camera (step 2), again defaulting to the first device ID
                        mCameraManager.openCamera(mCameraManager.getCameraIdList()[0], new CameraDevice.StateCallback() {
                            @Override
                            public void onOpened(@NonNull CameraDevice camera) {
                                mCameraDevice = camera;
                                // Once the camera is open, create the session and the request
                                OutputConfiguration outputConfiguration = new OutputConfiguration(autoView.getHolder().getSurface());
                                CameraCaptureSession.StateCallback callback = new CameraCaptureSession.StateCallback() {
                                    @Override
                                    public void onConfigured(@NonNull CameraCaptureSession session) {
                                        // Session configured successfully; build the request here
                                        // and call setRepeatingRequest to start the image stream
                                        mCameraCaptureSession = session;
                                        try {
                                            CaptureRequest.Builder builder = camera.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
                                            builder.addTarget(autoView.getHolder().getSurface());
                                            mCameraCaptureSession.setRepeatingRequest(builder.build(), new CameraCaptureSession.CaptureCallback() {
                                                @Override
                                                public void onCaptureStarted(@NonNull CameraCaptureSession session, @NonNull CaptureRequest request, long timestamp, long frameNumber) {
                                                    super.onCaptureStarted(session, request, timestamp, frameNumber);
                                                }

                                                @Override
                                                public void onCaptureProgressed(@NonNull CameraCaptureSession session, @NonNull CaptureRequest request, @NonNull CaptureResult partialResult) {
                                                    super.onCaptureProgressed(session, request, partialResult);
                                                }

                                                @Override
                                                public void onCaptureCompleted(@NonNull CameraCaptureSession session, @NonNull CaptureRequest request, @NonNull TotalCaptureResult result) {
                                                    super.onCaptureCompleted(session, request, result);
                                                }

                                                @Override
                                                public void onCaptureFailed(@NonNull CameraCaptureSession session, @NonNull CaptureRequest request, @NonNull CaptureFailure failure) {
                                                    super.onCaptureFailed(session, request, failure);
                                                }

                                                @Override
                                                public void onCaptureSequenceCompleted(@NonNull CameraCaptureSession session, int sequenceId, long frameNumber) {
                                                    super.onCaptureSequenceCompleted(session, sequenceId, frameNumber);
                                                }

                                                @Override
                                                public void onCaptureSequenceAborted(@NonNull CameraCaptureSession session, int sequenceId) {
                                                    super.onCaptureSequenceAborted(session, sequenceId);
                                                }

                                                @Override
                                                public void onCaptureBufferLost(@NonNull CameraCaptureSession session, @NonNull CaptureRequest request, @NonNull Surface target, long frameNumber) {
                                                    super.onCaptureBufferLost(session, request, target, frameNumber);
                                                }
                                            }, mCameraHandler);
                                        } catch (Exception e) {
                                            e.printStackTrace();
                                        }
                                    }

                                    @Override
                                    public void onConfigureFailed(@NonNull CameraCaptureSession session) {
                                        Log.e(TAG, "onConfigureFailed: ");
                                    }
                                };

                                // This is where the session is actually created
                                try {
                                    if (android.os.Build.VERSION.SDK_INT >= android.os.Build.VERSION_CODES.P) {
                                        SessionConfiguration configuration = new SessionConfiguration(SessionConfiguration.SESSION_REGULAR,
                                                Collections.singletonList(outputConfiguration),
                                                new HandlerExecutor(mCameraHandler),
                                                callback);
                                        mCameraDevice.createCaptureSession(configuration);
                                    } else {
                                        mCameraDevice.createCaptureSession(Collections.singletonList(autoView.getHolder().getSurface()),
                                                callback, mCameraHandler);
                                    }
                                } catch (Exception e) {
                                    e.printStackTrace();
                                }
                            }

                            @Override
                            public void onDisconnected(@NonNull CameraDevice camera) {
                                Log.e(TAG, "onDisconnected: ");
                            }

                            @Override
                            public void onError(@NonNull CameraDevice camera, int error) {
                                String msg;
                                switch (error) {
                                    case ERROR_CAMERA_DEVICE:
                                        msg = "Fatal (device)";
                                        break;
                                    case ERROR_CAMERA_DISABLED:
                                        msg = "Device policy";
                                        break;
                                    case ERROR_CAMERA_IN_USE:
                                        msg = "Camera in use";
                                        break;
                                    case ERROR_CAMERA_SERVICE:
                                        msg = "Fatal (service)";
                                        break;
                                    case ERROR_MAX_CAMERAS_IN_USE:
                                        msg = "Maximum cameras in use";
                                        break;
                                    default:
                                        msg = "Unknown";
                                        break;
                                }
                                Log.e(TAG, "onError: " + msg);
                            }
                        }, mCameraHandler);
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                });
            }

            @Override
            public void surfaceChanged(@NonNull SurfaceHolder holder, int format, int width, int height) {
            }

            @Override
            public void surfaceDestroyed(@NonNull SurfaceHolder holder) {
            }
        });
    } catch (Exception e) {
        e.printStackTrace();
    }
}
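One detail worth calling out: on API 28+ SessionConfiguration wants an Executor rather than a Handler, which is why the code above wraps mCameraHandler in a HandlerExecutor. Assuming that class is a hand-rolled helper (it is not part of the public SDK), it only needs to post to the handler; androidx.core's ExecutorCompat.create(handler) is an off-the-shelf equivalent. A minimal Kotlin sketch:

// Minimal adapter turning a Handler into an Executor (hypothetical helper;
// equivalent to androidx.core.os.ExecutorCompat.create(handler))
class HandlerExecutor(private val handler: Handler) : Executor {
    override fun execute(command: Runnable) {
        handler.post(command)
    }
}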
That covers the full path from opening a camera to running a preview, rebuilt from a single demo. Rewriting the code is about internalizing the structure, so the focus here was on flow; next I will dig into the camera parameters in detail.